| { |
| "paper_id": "P17-1031", |
| "header": { |
| "generated_with": "S2ORC 1.0.0", |
| "date_generated": "2023-01-19T08:17:07.645161Z" |
| }, |
| "title": "Learning attention for historical text normalization by learning to pronounce", |
| "authors": [ |
| { |
| "first": "Marcel", |
| "middle": [], |
| "last": "Bollmann", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "Universit\u00e4t Bochum", |
| "location": { |
| "country": "Germany" |
| } |
| }, |
| "email": "bollmann@linguistics.rub.de" |
| }, |
| { |
| "first": "Joachim", |
| "middle": [], |
| "last": "Bingel", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "University of Copenhagen", |
| "location": {} |
| }, |
| "email": "bingel@di.ku.dk" |
| }, |
| { |
| "first": "Anders", |
| "middle": [], |
| "last": "S\u00f8gaard", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "University of Copenhagen", |
| "location": {} |
| }, |
| "email": "soegaard@di.ku.dk" |
| } |
| ], |
| "year": "", |
| "venue": null, |
| "identifiers": {}, |
| "abstract": "Automated processing of historical texts often relies on pre-normalization to modern word forms. Training encoder-decoder architectures to solve such problems typically requires a lot of training data, which is not available for the named task. We address this problem by using several novel encoder-decoder architectures, including a multi-task learning (MTL) architecture using a grapheme-to-phoneme dictionary as auxiliary data, pushing the state-of-theart by an absolute 2% increase in performance. We analyze the induced models across 44 different texts from Early New High German. Interestingly, we observe that, as previously conjectured, multi-task learning can learn to focus attention during decoding, in ways remarkably similar to recently proposed attention mechanisms. This, we believe, is an important step toward understanding how MTL works.", |
| "pdf_parse": { |
| "paper_id": "P17-1031", |
| "_pdf_hash": "", |
| "abstract": [ |
| { |
| "text": "Automated processing of historical texts often relies on pre-normalization to modern word forms. Training encoder-decoder architectures to solve such problems typically requires a lot of training data, which is not available for the named task. We address this problem by using several novel encoder-decoder architectures, including a multi-task learning (MTL) architecture using a grapheme-to-phoneme dictionary as auxiliary data, pushing the state-of-theart by an absolute 2% increase in performance. We analyze the induced models across 44 different texts from Early New High German. Interestingly, we observe that, as previously conjectured, multi-task learning can learn to focus attention during decoding, in ways remarkably similar to recently proposed attention mechanisms. This, we believe, is an important step toward understanding how MTL works.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Abstract", |
| "sec_num": null |
| } |
| ], |
| "body_text": [ |
| { |
| "text": "There is a growing interest in automated processing of historical documents, as evidenced by the growing field of digital humanities and the increasing number of digitally available collections of historical documents. A common approach to deal with the high amount of variance often found in this type of data is to perform spelling normalization (Piotrowski, 2012) , which is the mapping of historical spelling variants to standardized/modernized forms (e.g. vnd \u2192 und 'and').", |
| "cite_spans": [ |
| { |
| "start": 348, |
| "end": 366, |
| "text": "(Piotrowski, 2012)", |
| "ref_id": "BIBREF24" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "Training data for supervised learning of historical text normalization is typically scarce, making it a challenging task for neural architectures, which typically require large amounts of labeled data. Nevertheless, we explore framing the spelling normalization task as a character-based sequence-to-sequence transduction problem, and use encoder-decoder recurrent neural networks (RNNs) to induce our transduction models. This is similar to models that have been proposed for neural machine translation (e.g., Cho et al. (2014) ), so essentially, our approach could also be considered a specific case of character-based neural machine translation.", |
| "cite_spans": [ |
| { |
| "start": 511, |
| "end": 528, |
| "text": "Cho et al. (2014)", |
| "ref_id": "BIBREF9" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "By basing our model on individual characters as input, we keep the vocabulary size small, which in turn reduces the model's complexity and the amount of data required to train it effectively. Using an encoder-decoder architecture removes the need for an explicit character alignment between historical and modern wordforms. Furthermore, we explore using an auxiliary task for which data is more readily available, namely grapheme-tophoneme mapping (word pronunciation), to regularize the induction of the normalization models.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "We propose several architectures, including multi-task learning architectures taking advantage of the auxiliary data, and evaluate them across 44 small datasets from Early New High German.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "Contributions Our contributions are as follows:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "\u2022 We are, to the best of our knowledge, the first to propose and evaluate encoder-decoder architectures for historical text normalization.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "\u2022 We evaluate several such architectures across 44 datasets of Early New High German.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "\u2022 We show that such architectures benefit from bidirectional encoding, beam search, and attention.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "\u2022 We also show that MTL with pronunciation as an auxiliary task improves the performance of architectures without attention.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "\u2022 We analyze the above architectures and show that the MTL architecture learns attention from the auxiliary task, making the attention mechanism largely redundant.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "\u2022 We make our implementation publicly available at https://bitbucket.org/ mbollmann/acl2017.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "In sum, we both push the state-of-the-art in historical text normalization and present an analysis that, we believe, brings us a step further in understanding the benefits of multi-task learning.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "Normalization For the normalization task, we use a total of 44 texts from the Anselm corpus (Dipper and Schultz-Balluff, 2013) of Early New High German. 1 The corpus is a collection of manuscripts and prints of the same core text, a religious treatise. Although the texts are semi-parallel and share some vocabulary, they were written in different time periods (between the 14th and 16th century) as well as different dialectal regions, and show quite diverse spelling characteristics. For example, the modern German word Frau 'woman' can be spelled as fraw/vraw (Me), frawe (N2), frauwe (St), fra\u00fcwe (B2), frow (Stu), vrowe (Ka), vorwe (Sa), or vrouwe (B), among others. 2 All texts in the Anselm corpus are manually annotated with gold-standard normalizations following guidelines described in Krasselt et al. (2015) . For our experiments, we excluded texts from the corpus that are shorter than 4,000 tokens, as well as a few for which annotations were not yet available at the time of writing (mostly Low German and Dutch versions). Nonetheless, the remaining 44 texts are still quite short for machine-learning standards, ranging from about 4,200 to 13,200 tokens, with an average length of 7,350 tokens.", |
| "cite_spans": [ |
| { |
| "start": 92, |
| "end": 126, |
| "text": "(Dipper and Schultz-Balluff, 2013)", |
| "ref_id": "BIBREF12" |
| }, |
| { |
| "start": 672, |
| "end": 673, |
| "text": "2", |
| "ref_id": null |
| }, |
| { |
| "start": 796, |
| "end": 818, |
| "text": "Krasselt et al. (2015)", |
| "ref_id": "BIBREF18" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Datasets", |
| "sec_num": "2" |
| }, |
| { |
| "text": "For all texts, we removed tokens that consisted solely of punctuation characters. We also lowercase all characters, since it helps keep the size of the vocabulary low, and uppercasing of words is usually not very consistent in historical texts. Tokenization was not an issue for pre-processing these texts, since modern token boundaries have already been marked by the transcribers.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Datasets", |
| "sec_num": "2" |
| }, |
| { |
| "text": "1 https://www.linguistics.rub.de/ anselm/ 2 We refer to individual texts using the same internal IDs that are found in the Anselm corpus (cf. the website).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Datasets", |
| "sec_num": "2" |
| }, |
| { |
| "text": "Grapheme-to-phoneme mappings We use learning to pronounce as our auxiliary task. This task consists of learning mappings from sequences of graphemes to the corresponding sequences of phonemes. We use the German part of the CELEX lexical database (Baayen et al., 1995) , particularly the database of phonetic transcriptions of German wordforms. The database contains a total of 365,530 wordforms with transcriptions in DISC format, which assigns one character to each distinct phonological segment (including affricates and diphthongs). For example, the word Jungfrau 'virgin' is represented as 'jUN-frB.", |
| "cite_spans": [ |
| { |
| "start": 246, |
| "end": 267, |
| "text": "(Baayen et al., 1995)", |
| "ref_id": "BIBREF1" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Datasets", |
| "sec_num": "2" |
| }, |
| { |
| "text": "We propose several architectures that are extensions of a base neural network architecture, closely following the sequence-to-sequence model proposed by Sutskever et al. (2014) . It consists of the following:", |
| "cite_spans": [ |
| { |
| "start": 153, |
| "end": 176, |
| "text": "Sutskever et al. (2014)", |
| "ref_id": "BIBREF30" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Model 3.1 Base model", |
| "sec_num": "3" |
| }, |
| { |
| "text": "\u2022 an embedding layer that maps one-hot input vectors to dense vectors;", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Model 3.1 Base model", |
| "sec_num": "3" |
| }, |
| { |
| "text": "\u2022 an encoder RNN that transforms the input sequence to an intermediate vector of fixed dimensionality;", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Model 3.1 Base model", |
| "sec_num": "3" |
| }, |
| { |
| "text": "\u2022 a decoder RNN whose hidden state is initialized with the intermediate vector, and which is fed the output prediction of one timestep as the input for the next one; and", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Model 3.1 Base model", |
| "sec_num": "3" |
| }, |
| { |
| "text": "\u2022 a final dense layer with a softmax activation which takes the decoder's output and generates a probability distribution over the output classes at each timestep.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Model 3.1 Base model", |
| "sec_num": "3" |
| }, |
| { |
| "text": "For the encoder/decoder RNNs, we use long short-term memory units (LSTM) (Hochreiter and Schmidhuber, 1997) . LSTMs are designed to allow recurrent networks to better learn long-term dependencies, and have proven advantageous to standard RNNs on many tasks. We found no significant advantage from stacking multiple LSTM layers for our task, so we use the simplest competitive model with only a single LSTM unit for both encoder and decoder.", |
| "cite_spans": [ |
| { |
| "start": 73, |
| "end": 107, |
| "text": "(Hochreiter and Schmidhuber, 1997)", |
| "ref_id": "BIBREF14" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Model 3.1 Base model", |
| "sec_num": "3" |
| }, |
| { |
| "text": "By using this encoder-decoder model, we avoid the need to generate explicit alignments between the input and output sequences, which would bring up the question of how to deal with input/output Figure 1 : Flow diagram of the base model; left side is the encoder, right side the decoder, the latter of which has an additional prediction layer on top. Multi-task learning variants use two separate prediction layers for main/auxiliary tasks, while sharing the rest of the model. Embedding layers for the inputs are not explicitly shown.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 194, |
| "end": 202, |
| "text": "Figure 1", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Model 3.1 Base model", |
| "sec_num": "3" |
| }, |
| { |
| "text": "pairs of different lengths. Another important property is that the model does not start to generate any output until it has seen the full input sequence, which in theory allows it to learn from any part of the input, without being restricted to fixed context windows. An example illustration of the unrolled network is shown in Fig. 1 .", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 328, |
| "end": 334, |
| "text": "Fig. 1", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Model 3.1 Base model", |
| "sec_num": "3" |
| }, |
| { |
| "text": "During training, the encoder inputs are the historical wordforms, while the decoder inputs correspond to the correct modern target wordforms. We then train each model by minimizing the crossentropy loss across all output characters; i.e., if y = (y 1 , ..., y n ) is the correct output word (as a list of one-hot vectors of output characters) and y = (\u0177 1 , ...,\u0177 n ) is the model's output, we minimize the mean loss \u2212 n i=1 y i log\u0177 i over all training samples. For the optimization, we use the Adam algorithm (Kingma and Ba, 2015) with a learning rate of 0.003.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Training", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "To reduce computational complexity, we also set a maximum word length of 14, and filter all training samples where either the input or output word is longer than 14 characters. This only affects 172 samples across the whole dataset, and is only done during training. In other words, we evaluate our models across all the test examples.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Training", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "For prediction, our base model generates output character sequences in a greedy fashion, selecting the character with the highest probability at each timestep. This works fairly well, but the greedy approach can yield suboptimal global picks, in which each individual character is sensibly derived from the input, but the overall word is non-sensical. We therefore also experiment with beam search decoding, setting the beam size to 5.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Decoding", |
| "sec_num": "3.3" |
| }, |
| { |
| "text": "Finally, we also experiment with using a lexical filter during the decoding step. Here, before picking the next 5 most likely characters during beam search, we remove all characters that would lead to a string not covered by the lexicon. This is again intended to reduce the occurrence of nonsensical outputs. For the lexicon, we use all word forms from CELEX (cf. Sec. 2) plus the target word forms from the training set. 3", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Decoding", |
| "sec_num": "3.3" |
| }, |
| { |
| "text": "In our base architecture, we assume that we can decode from a single vector encoding of the input sequence. This is a strong assumption, especially with long input sequences. Attention mechanisms give us more flexibility. The idea is that instead of encoding the entire input sequence into a fixedlength vector, we allow the decoder to \"attend\" to different parts of the input character sequence at each time step of the output generation. Importantly, we let the model learn what to attend to based on the input sequence and what it has produced so far.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Attention", |
| "sec_num": "3.4" |
| }, |
| { |
| "text": "Our implementation is identical to the decoder with soft attention described by Xu et al. (2015) . If a = (a 1 , ..., a n ) is the encoder's output and h t is the decoder's hidden state at timestep t, we first calculate a context vector\u1e91 t as a weighted combination of the output vectors a i :", |
| "cite_spans": [ |
| { |
| "start": 80, |
| "end": 96, |
| "text": "Xu et al. (2015)", |
| "ref_id": "BIBREF33" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Attention", |
| "sec_num": "3.4" |
| }, |
| { |
| "text": "z t = n i=1 \u03b1 i a i (1)", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Attention", |
| "sec_num": "3.4" |
| }, |
| { |
| "text": "The weights \u03b1 i are derived by feeding the encoder's output and the decoder's hidden state from the previous timestep into a multilayer perceptron, called the attention model (f att ):", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Attention", |
| "sec_num": "3.4" |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
| "raw_str": "\u03b1 = sof tmax(f att (a, h t\u22121 ))", |
| "eq_num": "(2)" |
| } |
| ], |
| "section": "Attention", |
| "sec_num": "3.4" |
| }, |
| { |
| "text": "We then modify the decoder by conditioning its internal states not only on the previous hidden state h t\u22121 and the previously predicted output character y t\u22121 , but also on the context vector\u1e91 t :", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Attention", |
| "sec_num": "3.4" |
| }, |
| { |
| "text": "i t = \u03c3(W i [h t\u22121 , y t\u22121 ,\u1e91 t ] + b i ) f t = \u03c3(W f [h t\u22121 , y t\u22121 ,\u1e91 t ] + b f ) o t = \u03c3(W o [h t\u22121 , y t\u22121 ,\u1e91 t ] + b o ) g t = tanh(W g [h t\u22121 , y t\u22121 ,\u1e91 t ] + b g ) c t = f t c t\u22121 + i t g t h t = o t tanh(c t ) (3)", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Attention", |
| "sec_num": "3.4" |
| }, |
| { |
| "text": "In Eq. 3, we follow the traditional LSTM description consisting of input gate i t , forget gate f t , output gate o t , cell state c t and hidden state h t , where W and b are trainable parameters.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Attention", |
| "sec_num": "3.4" |
| }, |
| { |
| "text": "For all experiments including an attentional decoder, we use a bi-directional encoder, comprised of one LSTM layer that reads the input sequence normally and another LSTM layer that reads it backwards, and attend over the concatenated outputs of these two layers.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Attention", |
| "sec_num": "3.4" |
| }, |
| { |
| "text": "While a precise alignment of input and output sequences is sometimes difficult, most of the time the sequences align in a sequential order, which can be exploited by an attentional component.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Attention", |
| "sec_num": "3.4" |
| }, |
| { |
| "text": "Finally, we introduce a variant of the base architecture, with or without beam search, that does multi-task learning (Caruana, 1993 ). The multitask architecture only differs from the base architecture in having two classifier functions at the outer layer, one for each of our two tasks. Our auxiliary task is to predict a sequence of phonemes as the correct pronunciation of an input sequence of graphemes. This choice is motivated by the relationship between phonology and orthography, in particular the observation that spelling variation often stems from phonological variation.", |
| "cite_spans": [ |
| { |
| "start": 117, |
| "end": 131, |
| "text": "(Caruana, 1993", |
| "ref_id": "BIBREF7" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Multi-task learning", |
| "sec_num": "3.5" |
| }, |
| { |
| "text": "We train our multi-task learning architecture by alternating between the two tasks, sampling one instance of the auxiliary task for each training sample of the main task. We use the encoderdecoder to generate a corresponding output se-quence, whether a modern word form or a pronunciation. Doing so, we suffer a loss with respect to the true output sequence and update the model parameters. The update for a sample from a specific task affects the parameters of corresponding classifier function, as well as all the parameters of the shared hidden layers.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Multi-task learning", |
| "sec_num": "3.5" |
| }, |
| { |
| "text": "We used a single manuscript (B) for manually evaluating and setting the hyperparameters. This manuscript is left out of the averages reported below. We believe that using a single manuscript for development, and using the same hyperparameters across all manuscripts, is more realistic, as we often do not have enough data in historical text normalization to reliably tune hyperparameters.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Hyperparameters", |
| "sec_num": "3.6" |
| }, |
| { |
| "text": "For the final evaluation, we set the size of the embedding and the recurrent LSTM layers to 128, applied a dropout of 0.3 to the input of each recurrent layer, and trained the model on mini-batches with 50 samples each for a total of 50 epochs (in the multi-task learning setup, mini-batches contain 50 samples of each task, and epochs are counted by the size of the training set for the main task only). All these parameters were set on the B manuscript alone.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Hyperparameters", |
| "sec_num": "3.6" |
| }, |
| { |
| "text": "We implemented all of the models in Keras (Chollet, 2015). Any parameters not explicitly described here were left at their default values in Keras v1.0.8.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Implementation", |
| "sec_num": "3.7" |
| }, |
| { |
| "text": "We split up each text into three parts, using 1,000 tokens each for a test set and a development set (that is not currently used), and the remainder of the text (between 2,000 and 11,000 tokens) for training. We then train and evaluate on each of the 43 texts (excluding the B text that was used for hyper-parameter tuning) individually.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Evaluation", |
| "sec_num": "4" |
| }, |
| { |
| "text": "Baselines We compare our architectures to several competitive baselines. Our first baseline is an averaged perceptron model trained to predict output character n-grams for each input character, after using Levenshtein alignment with generated segment distances (Wieling et al., 2009, Sec. 3.3) to align input and output characters. Our second baseline uses the same alignment, but trains a Table 1 : Average word accuracy across 43 texts from the Anselm dataset, evaluated on the first 1,000 tokens of each text. Evaluation on the base encoder-decoder model (Sec. 3.1) with greedy search, beam search (k = 5) and/or lexical filtering (Sec. 3.3), with attentional decoder (Sec. 3.4), and the multi-task learning (MTL) model using grapheme-to-phoneme mappings (Sec. 3.5).", |
| "cite_spans": [ |
| { |
| "start": 261, |
| "end": 293, |
| "text": "(Wieling et al., 2009, Sec. 3.3)", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 390, |
| "end": 397, |
| "text": "Table 1", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Evaluation", |
| "sec_num": "4" |
| }, |
| { |
| "text": "deep bi-LSTM sequential tagger, following Bollmann and S\u00f8gaard (2016). We evaluate this tagger using both standard and multi-task learning. Finally, we compare our model to the rule-based and Levenshtein-based algorithms provided by the Norma tool (Bollmann, 2012 ). 4", |
| "cite_spans": [ |
| { |
| "start": 248, |
| "end": 263, |
| "text": "(Bollmann, 2012", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Evaluation", |
| "sec_num": "4" |
| }, |
| { |
| "text": "We use word-level accuracy as our evaluation metric. While we also measure character-level metrics, minor differences on character level can cause large differences in downstream applications, so we believe that perfectly matching the output sequences is more useful. Average scores across all 43 texts are presented in Table 1 (see Appendix A for individual scores).", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 320, |
| "end": 327, |
| "text": "Table 1", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Word accuracy", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "We first see that almost all our encoder-decoder architectures perform significantly better than the four state-of-the-art baselines. All our architectures perform better than Norma and the averaged perceptron, and all the MTL architectures outperform Bollmann and S\u00f8gaard (2016) .", |
| "cite_spans": [ |
| { |
| "start": 252, |
| "end": 279, |
| "text": "Bollmann and S\u00f8gaard (2016)", |
| "ref_id": "BIBREF6" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Word accuracy", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "We also see that beam search, filtering, and attention lead to cumulative gains in the context of the single-task architecture -with the best architecture outperforming the state-of-the-art by almost 3% in absolute terms. For our multi-task architecture, we also observe gains when we add beam search and filtering, but importantly, adding attention does not help. In fact, attention hurts the performance of our multitask architecture quite significantly. Also note that the multi-task architecture without attention performs on-par with the single-task architecture with attention.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Word accuracy", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "We hypothesize that the reason for this pattern, which is not only observed in the average scores in Table 1 , but also quite consistent across the individual results in Appendix A, is that our multi-task learning already learns how to focus attention. This is the hypothesis that we will try to validate in Sec. 5: That multi-task learning can induce strategies for focusing attention comparable to attention strategies for recurrent neural networks.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 101, |
| "end": 108, |
| "text": "Table 1", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Word accuracy", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "Sample predictions A small selection of predictions from our models is shown in Table 2 . They serve to illustrate the effects of the various settings; e.g., the base model with greedy search tends to produce more nonsense words (ters, \u00fcnsget) than the others. Using a lexical filter helps the most in this regard: the base model with filtering correctly normalizes ergieng to erging '(he) fared', while decoding without a filter produces the non-word erbiggen. Even for herczenlichen (modern herzlichen 'heartfelt'), where no model finds the correct target form, only the model with filtering produces a somewhat reasonable alternative (herzgeliebtes 'heartily loved').", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 80, |
| "end": 87, |
| "text": "Table 2", |
| "ref_id": "TABREF2" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Word accuracy", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "In some cases (such as gewarnet 'warned'), only the models with attention or multi-task learning produce the correct normalization, but even when they are wrong, they often agree on the prediction (e.g. dicke, herzel). We will investigate this property further in Sec. 5.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Word accuracy", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "To gain further insights into our model, we created t-SNE projections (Maaten and Hinton, 2008) of vector representations learned on the M4 text. Fig. 2 shows the learned character embeddings. In the representations from the base model ( Fig. 2a) , characters that are often normalized to the same target character are indeed grouped closely together: e.g., historical <v> and <u> (and, to a smaller extent, <f>) are often used interchangeably in the M4 text. Note the wide separation of <n> and <m>, which is a feature of M4 that does not hold true for all of the texts, as these do not always display a clear distinction between nasals. On the other hand, the MTL model shows a better generalization of the training data (Fig. 2b) : here, <u> is grouped closer to other vowel characters and far away from <v>/<f>. Also, <n> and <m> are now in close proximity.", |
| "cite_spans": [ |
| { |
| "start": 70, |
| "end": 95, |
| "text": "(Maaten and Hinton, 2008)", |
| "ref_id": "BIBREF22" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 146, |
| "end": 152, |
| "text": "Fig. 2", |
| "ref_id": null |
| }, |
| { |
| "start": 238, |
| "end": 246, |
| "text": "Fig. 2a)", |
| "ref_id": null |
| }, |
| { |
| "start": 723, |
| "end": 732, |
| "text": "(Fig. 2b)", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Learned vector representations", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "We can also visualize the internal word representations that are produced by the encoder (Fig. 3) . Here, we chose words that demonstrate the interchangeable use of <u> and <v>. Historical vnd, vns, vmb become modern und, uns, um, changing the <v> to <u>. However, the representation of vmb learned by the base model is closer to forms like von, vor, uor, all starting with <v> in the target normalization. In the MTL model, however, these examples are indeed clustered together.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 89, |
| "end": 97, |
| "text": "(Fig. 3)", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Learned vector representations", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "5 Analysis: Multi-task learning helps focus attention Table 1 shows that models which employ either an attention mechanism or multi-task learning obtain similar improvements in word accuracy. However, we observe a decline in word accuracy for models that combine multi-task learning with attention.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 54, |
| "end": 61, |
| "text": "Table 1", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Learned vector representations", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "A possible interpretation of this counterintuitive pattern might be that attention and MTL, to some degree, learn similar functions of the input data, a conjecture by Caruana (1998) . We put this hypothesis to the test by closely investigating properties of the individual models below.", |
| "cite_spans": [ |
| { |
| "start": 167, |
| "end": 181, |
| "text": "Caruana (1998)", |
| "ref_id": "BIBREF8" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Learned vector representations", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "First, we are interested in the weight parameters of the final layer that transforms the decoder output to class probabilities. We consider these parameters for our standard encoder-decoder model and compare them to the weights that are learned by the attention and multi-task models, respectively. 5 Note that hidden layer parameters are not necessarily comparable across models, but with a fixed seed, differences in parameters over a reference model may be (and are, in our case). With a fixed seed, and iterating over data points in the same order, it is conceivable the two non-baselines end up in roughly the same alternative local optimum (or at least take comparable routes).", |
| "cite_spans": [ |
| { |
| "start": 299, |
| "end": 300, |
| "text": "5", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Model parameters", |
| "sec_num": "5.1" |
| }, |
| { |
| "text": "We observe that the weight differences between the standard and the attention model correlate with the differences between the standard and multitask model by a Pearson's r of 0.346, averaged across datasets, with a standard deviation of 0.315; on individual datasets, correlation coefficient is as high as 96. Figure 4 illustrates these highly parallel weight changes for the different models when trained on the N4 dataset.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 311, |
| "end": 319, |
| "text": "Figure 4", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Model parameters", |
| "sec_num": "5.1" |
| }, |
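The weight-difference correlation described above can be sketched as follows. The weight vectors below are toy stand-ins for the flattened final-layer parameters of the base, attention, and multi-task models; only the element-wise deltas against the base model enter the Pearson correlation.

```python
# Sketch of the weight-difference correlation analysis (Sec. 5.1),
# using made-up toy weight vectors in place of real model parameters.

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical flattened final-layer weights for three models
# trained from the same random seed.
w_base      = [0.10, -0.20, 0.05, 0.40, -0.10]
w_attention = [0.15, -0.25, 0.10, 0.55, -0.05]
w_multitask = [0.14, -0.24, 0.09, 0.52, -0.06]

delta_att = [a - b for a, b in zip(w_attention, w_base)]
delta_mtl = [m - b for m, b in zip(w_multitask, w_base)]

print(round(pearson_r(delta_att, delta_mtl), 3))  # parallel weight changes
```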
| { |
| "text": "Next, we compare the effect that employing either an attention mechanism or multi-task learning has on the actual output of our system. We find that out of the 210.9 word errors that the base model produces on average across all test sets (comprising 1,000 tokens each), attention resolves 47.7, while multi-task learning resolves an average of 45.4 errors. Crucially, the overlap of errors that are resolved by both the attention and the MTL model amounts to 27.7 on average. Attention and multi-task also introduce new errors compared to the base model (26.6 and 29.5 per test set, respectively), and again we can observe a relatively high agreement of the models (11.8 word errors are introduced by both models).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Final output", |
| "sec_num": "5.2" |
| }, |
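The error-overlap bookkeeping above reduces to set operations over the token positions where each model errs. The token IDs below are invented purely for illustration:

```python
# Toy illustration of the error-overlap analysis (Sec. 5.2): which of the
# base model's word errors each variant resolves, and how much they agree.

base_errors      = {1, 2, 3, 4, 5, 6, 7, 8}
attention_errors = {3, 4, 7, 8, 9}   # errors remaining, plus one new one
multitask_errors = {4, 5, 7, 8, 10}

resolved_by_att  = base_errors - attention_errors
resolved_by_mtl  = base_errors - multitask_errors
resolved_by_both = resolved_by_att & resolved_by_mtl

introduced_by_att  = attention_errors - base_errors
introduced_by_mtl  = multitask_errors - base_errors
introduced_by_both = introduced_by_att & introduced_by_mtl

print(len(resolved_by_att), len(resolved_by_mtl), len(resolved_by_both))
```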
| { |
| "text": "Finally, the attention and multi-task models display a word-level agreement of \u03ba=0.834 (Cohen's kappa), while either of these models is less strongly correlated with the base model (\u03ba=0.817 for attention and \u03ba=0.814 for multi-task learning).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Final output", |
| "sec_num": "5.2" |
| }, |
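Cohen's kappa here measures chance-corrected agreement between two models' per-token outputs. A minimal sketch, with invented correct/incorrect labels standing in for the real system outputs:

```python
# Minimal Cohen's kappa for word-level agreement between two models
# (Sec. 5.2): observed agreement corrected for chance agreement.

def cohens_kappa(a, b):
    assert len(a) == len(b)
    n = len(a)
    observed = sum(x == y for x, y in zip(a, b)) / n
    labels = set(a) | set(b)
    expected = sum((a.count(l) / n) * (b.count(l) / n) for l in labels)
    return (observed - expected) / (1 - expected)

model_1 = [1, 1, 1, 0, 1, 1, 0, 0, 1, 1]  # 1 = token normalized correctly
model_2 = [1, 1, 1, 0, 1, 0, 0, 1, 1, 1]

print(round(cohens_kappa(model_1, model_2), 3))
```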
| { |
| "text": "Our last analysis regards the saliency of the input timesteps with respect to the predictions of our models. We follow Li et al. (2016) in calculating first-derivative saliency for given input/output pairs and compare the scores from the different models. The higher the saliency of an input timestep, the more important it is in determining the model's prediction at a given output timestep. Therefore, if two models produce similar saliency matrices for a given input/output pair, they have learned to focus on similar parts of the input during the prediction. Our hypothesis is that the attentional and the multi-task learning model should be more similar in terms of saliency scores than either of them compared to the base model. Figure 5 shows a plot of the saliency matrices generated from the word pair czeychen -zeichen 'sign'. Here, the scores for the attentional and the MTL model indeed correlate by \u03c1 = 0.615, while those for the base model do not correlate with either of them. A systematic analysis across 19,000 word pairs (where all models agree on the output) shows that this effect only holds for longer input sequences (\u2265 7 characters), with a mean \u03c1 = 0.303 (\u00b10.177) for attentional vs. MTL model, while the base model correlates with either of them by \u03c1 < 0.21.", |
| "cite_spans": [ |
| { |
| "start": 119, |
| "end": 135, |
| "text": "Li et al. (2016)", |
| "ref_id": "BIBREF19" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 735, |
| "end": 743, |
| "text": "Figure 5", |
| "ref_id": "FIGREF3" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Saliency analysis", |
| "sec_num": "5.3" |
| }, |
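First-derivative saliency, as in Li et al. (2016), is the magnitude of the gradient of an output score with respect to each input position. The sketch below approximates that gradient with central finite differences on a toy scoring function that stands in for the trained network; the function and its coefficients are invented for illustration.

```python
# Sketch of first-derivative saliency (Sec. 5.3): saliency of input
# position i is |d score / d x_i|, approximated by finite differences.

def toy_score(x):
    # Invented stand-in for the model's score of one output character:
    # it depends strongly on x[1] and only weakly on the rest.
    return 0.1 * x[0] + 2.0 * x[1] ** 2 + 0.05 * x[2]

def saliency(score_fn, x, eps=1e-5):
    grads = []
    for i in range(len(x)):
        hi = list(x); hi[i] += eps
        lo = list(x); lo[i] -= eps
        grads.append(abs((score_fn(hi) - score_fn(lo)) / (2 * eps)))
    return grads

s = saliency(toy_score, [1.0, 1.0, 1.0])
print([round(v, 2) for v in s])  # position 1 dominates the prediction
```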
| { |
| "text": "Many traditional approaches to spelling normalization of historical texts use edit distances or some form of character-level rewrite rules, handcrafted (Baron and Rayson, 2008) or learned automatically (Bollmann, 2013; Porta et al., 2013) .", |
| "cite_spans": [ |
| { |
| "start": 152, |
| "end": 176, |
| "text": "(Baron and Rayson, 2008)", |
| "ref_id": "BIBREF3" |
| }, |
| { |
| "start": 202, |
| "end": 218, |
| "text": "(Bollmann, 2013;", |
| "ref_id": "BIBREF5" |
| }, |
| { |
| "start": 219, |
| "end": 238, |
| "text": "Porta et al., 2013)", |
| "ref_id": "BIBREF25" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related Work", |
| "sec_num": "6" |
| }, |
| { |
| "text": "A more recent approach is based on characterbased statistical machine translation applied to historical text (Pettersson et al., 2013; S\u00e1nchez-Mart\u00ednez et al., 2013; Scherrer and Erjavec, 2013; or dialectal data (Scherrer and Ljube\u0161i\u0107, 2016) . This is conceptually very similar to our approach, except that we substitute the classical SMT algorithms for neural networks. Indeed, our models can be seen as a form of character-based neural MT (Cho et al., 2014) .", |
| "cite_spans": [ |
| { |
| "start": 109, |
| "end": 134, |
| "text": "(Pettersson et al., 2013;", |
| "ref_id": "BIBREF23" |
| }, |
| { |
| "start": 135, |
| "end": 165, |
| "text": "S\u00e1nchez-Mart\u00ednez et al., 2013;", |
| "ref_id": "BIBREF31" |
| }, |
| { |
| "start": 166, |
| "end": 193, |
| "text": "Scherrer and Erjavec, 2013;", |
| "ref_id": "BIBREF26" |
| }, |
| { |
| "start": 212, |
| "end": 241, |
| "text": "(Scherrer and Ljube\u0161i\u0107, 2016)", |
| "ref_id": "BIBREF28" |
| }, |
| { |
| "start": 441, |
| "end": 459, |
| "text": "(Cho et al., 2014)", |
| "ref_id": "BIBREF9" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related Work", |
| "sec_num": "6" |
| }, |
| { |
| "text": "Neural networks have rarely been applied to historical spelling normalization so far. Azawi et al. (2013) normalize old Bible text using bidirectional LSTMs with a layer that performs alignment between input and output wordforms. Bollmann and S\u00f8gaard (2016) also use bi-LSTMs to frame spelling normalization as a characterbased sequence labelling task, performing character alignment as a preprocessing step. Multi-task learning was shown to be effective for a variety of NLP tasks, such as POS tagging, chunking, named entity recognition (Collobert et al., 2011) or sentence compression (Klerke et al., 2016) . It has also been used in encoderdecoder architectures, typically for machine translation (Dong et al., 2015; Luong et al., 2016) , though so far not with attentional decoders.", |
| "cite_spans": [ |
| { |
| "start": 230, |
| "end": 257, |
| "text": "Bollmann and S\u00f8gaard (2016)", |
| "ref_id": "BIBREF6" |
| }, |
| { |
| "start": 539, |
| "end": 563, |
| "text": "(Collobert et al., 2011)", |
| "ref_id": "BIBREF11" |
| }, |
| { |
| "start": 588, |
| "end": 609, |
| "text": "(Klerke et al., 2016)", |
| "ref_id": "BIBREF17" |
| }, |
| { |
| "start": 701, |
| "end": 720, |
| "text": "(Dong et al., 2015;", |
| "ref_id": "BIBREF13" |
| }, |
| { |
| "start": 721, |
| "end": 740, |
| "text": "Luong et al., 2016)", |
| "ref_id": "BIBREF21" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related Work", |
| "sec_num": "6" |
| }, |
| { |
| "text": "We presented an approach to historical spelling normalization using neural networks with an encoder-decoder architecture, and showed that it consistently outperforms several existing baselines. Encouragingly, our work proves to be fully competitive with the sequence-labeling approach by Bollmann and S\u00f8gaard (2016) , without requiring a prior character alignment.", |
| "cite_spans": [ |
| { |
| "start": 288, |
| "end": 315, |
| "text": "Bollmann and S\u00f8gaard (2016)", |
| "ref_id": "BIBREF6" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conclusion and Future Work", |
| "sec_num": "7" |
| }, |
| { |
| "text": "Specifically, we demonstrated the aptitude of multi-task learning to mitigate the shortage of training data for the named task. We included a multifaceted analysis of the effects that MTL introduces to our models and the resemblance that it bears to attention mechanisms. We believe that this analysis is a valuable contribution to the understanding of MTL approaches also beyond spelling normalization, and we are confident that our observations will stimulate further research into the relationship between MTL and attention.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conclusion and Future Work", |
| "sec_num": "7" |
| }, |
| { |
| "text": "Finally, many improvements to the presented approach are conceivable, most notably introducing some form of token context to the model. Currently, we only consider word forms in isolation, which is problematic for ambiguous cases (such as jn, which can normalize to in 'in' or ihn 'him') and conceivably makes the task harder for others. Reranking the predictions with a language model could be one possible way to improve on this. , for example, experiment with segment-based normalization, using a character-based SMT model with character input derived from segments (essentially, token ngrams) instead of single tokens, which also intro-duces context. Such an approach could also deal with the issue of tokenization differences between the historical and the modern text, which is another challenge often found in datasets of historical text. Table 3 : Word accuracy on the Anselm dataset, evaluated on the first 1,000 tokens, using the baseline models (cf. Sec. 4): the Norma tool (Bollmann, 2012) , an averaged perceptron model, and a deep bi-LSTM sequential tagger (Bollmann and S\u00f8gaard, 2016) . Table 4 : Word accuracy on the Anselm dataset, evaluated on the first 1,000 tokens, using our base encoder-decoder model (Sec. 3) and the multi-task model. G = greedy decoding, B = beam-search decoding (with beam size 5), F = lexical filter, A = attentional model. Best results (also taking into account the baseline results from Table 3 ) shown in bold.", |
| "cite_spans": [ |
| { |
| "start": 985, |
| "end": 1001, |
| "text": "(Bollmann, 2012)", |
| "ref_id": null |
| }, |
| { |
| "start": 1071, |
| "end": 1099, |
| "text": "(Bollmann and S\u00f8gaard, 2016)", |
| "ref_id": "BIBREF6" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 846, |
| "end": 853, |
| "text": "Table 3", |
| "ref_id": null |
| }, |
| { |
| "start": 1102, |
| "end": 1109, |
| "text": "Table 4", |
| "ref_id": null |
| }, |
| { |
| "start": 1432, |
| "end": 1439, |
| "text": "Table 3", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Conclusion and Future Work", |
| "sec_num": "7" |
| }, |
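The language-model reranking idea for ambiguous tokens like jn can be sketched as a weighted combination of the normalizer's (channel) probability and a context LM probability. All probabilities and the interpolation weight below are invented for illustration; the paper only proposes this direction, it does not implement it.

```python
import math

# Hedged sketch of LM reranking for an ambiguous historical token:
# jn may normalize to 'in' or 'ihn'; context should decide.

def rerank(candidates, lm_prob, alpha=0.5):
    """Pick the candidate maximizing an interpolation of channel-model
    and language-model log-probabilities (higher is better)."""
    def score(word, channel_p):
        return (1 - alpha) * math.log(channel_p) + alpha * math.log(lm_prob[word])
    return max(candidates, key=lambda w: score(w, candidates[w]))

# Hypothetical normalizer probabilities for the historical token 'jn' ...
candidates = {"in": 0.55, "ihn": 0.45}
# ... and hypothetical LM probabilities given the sentence context,
# which here strongly prefers the accusative pronoun.
lm_in_context = {"in": 0.01, "ihn": 0.20}

print(rerank(candidates, lm_in_context))
```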
| { |
| "text": "We observe that due to this filtering, we cannot reach 2.25% of the targets in our test set, most of which are Latin word forms.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "https://github.com/comphist/norma", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "For the multi-task models, this analysis disregards those dimensions that do not correspond to classes in the main task.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| } |
| ], |
| "back_matter": [ |
| { |
| "text": "Marcel Bollmann was supported by Deutsche Forschungsgemeinschaft (DFG), Grant DI 1558/4. This research is further supported by ERC Starting Grant LOWLANDS No. 313695, as well as by Trygfonden.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Acknowledgments", |
| "sec_num": null |
| }, |
| { |
| "text": "For interested parties, we provide our full evaluation results for each single text in our dataset. Table 3 shows token counts, a rough classification of each text's dialectal region, and the results for the baseline methods. ", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 100, |
| "end": 107, |
| "text": "Table 3", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "A Supplementary Material", |
| "sec_num": null |
| } |
| ], |
| "bib_entries": { |
| "BIBREF0": { |
| "ref_id": "b0", |
| "title": "Normalizing historical orthography for OCR historical documents using LSTM", |
| "authors": [ |
| { |
| "first": "Muhammad", |
| "middle": [ |
| "Zeshan" |
| ], |
| "last": "Mayce Al Azawi", |
| "suffix": "" |
| }, |
| { |
| "first": "Thomas", |
| "middle": [ |
| "M" |
| ], |
| "last": "Afzal", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Breuel", |
| "suffix": "" |
| } |
| ], |
| "year": 2013, |
| "venue": "Proceedings of the 2nd International Workshop on Historical Document Imaging and Processing", |
| "volume": "", |
| "issue": "", |
| "pages": "80--85", |
| "other_ids": { |
| "DOI": [ |
| "10.1145/2501115.2501131" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Mayce Al Azawi, Muhammad Zeshan Afzal, and Thomas M. Breuel. 2013. Normalizing histor- ical orthography for OCR historical documents using LSTM. In Proceedings of the 2nd In- ternational Workshop on Historical Document Imaging and Processing. ACM, pages 80-85. https://doi.org/10.1145/2501115.2501131.", |
| "links": null |
| }, |
| "BIBREF1": { |
| "ref_id": "b1", |
| "title": "The CELEX lexical database (Release 2) (CD-ROM)", |
| "authors": [ |
| { |
| "first": "R", |
| "middle": [], |
| "last": "", |
| "suffix": "" |
| }, |
| { |
| "first": "Harald", |
| "middle": [], |
| "last": "Baayen", |
| "suffix": "" |
| }, |
| { |
| "first": "Richard", |
| "middle": [], |
| "last": "Piepenbrock", |
| "suffix": "" |
| }, |
| { |
| "first": "L\u00e9on", |
| "middle": [], |
| "last": "Gulikers", |
| "suffix": "" |
| } |
| ], |
| "year": 1995, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "R. Harald Baayen, Richard Piepenbrock, and L\u00e9on Gu- likers. 1995. The CELEX lexical database (Re- lease 2) (CD-ROM).", |
| "links": null |
| }, |
| "BIBREF2": { |
| "ref_id": "b2", |
| "title": "Linguistic Data Consortium", |
| "authors": [], |
| "year": null, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Linguistic Data Consor- tium, University of Pennsylvania, Philadelphia, PA. https://catalog.ldc.upenn.edu/ldc96l14.", |
| "links": null |
| }, |
| "BIBREF3": { |
| "ref_id": "b3", |
| "title": "VARD 2: A tool for dealing with spelling variation in historical corpora", |
| "authors": [ |
| { |
| "first": "Alistair", |
| "middle": [], |
| "last": "Baron", |
| "suffix": "" |
| }, |
| { |
| "first": "Paul", |
| "middle": [], |
| "last": "Rayson", |
| "suffix": "" |
| } |
| ], |
| "year": 2008, |
| "venue": "Proceedings of the Postgraduate Conference in Corpus Linguistics", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Alistair Baron and Paul Rayson. 2008. VARD 2: A tool for dealing with spelling variation in historical corpora. In Proceedings of the Postgraduate Conference in Corpus Linguistics. http://eprints.lancs.ac.uk/41666/.", |
| "links": null |
| }, |
| "BIBREF4": { |
| "ref_id": "b4", |
| "title": "Semi-)automatic normalization of historical texts using distance measures and the Norma tool", |
| "authors": [], |
| "year": null, |
| "venue": "Proceedings of the Second Workshop on Annotation of Corpora for Research in the Humanities (ACRH-2)", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Marcel Bollmann. 2012. (Semi-)automatic normal- ization of historical texts using distance mea- sures and the Norma tool. In Proceedings of the Second Workshop on Annotation of Corpora for Research in the Humanities (ACRH-2). Lis- bon, Portugal. https://www.linguistics.ruhr-uni-", |
| "links": null |
| }, |
| "BIBREF5": { |
| "ref_id": "b5", |
| "title": "Automatic normalization for linguistic annotation of historical language data", |
| "authors": [ |
| { |
| "first": "Marcel", |
| "middle": [], |
| "last": "Bollmann", |
| "suffix": "" |
| } |
| ], |
| "year": 2013, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "3--310764", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Marcel Bollmann. 2013. Automatic nor- malization for linguistic annotation of his- torical language data. Bochumer Lin- guistische Arbeitsberichte 13. http://nbn- resolving.de/urn/resolver.pl?urn:nbn:de:hebis:30:3- 310764.", |
| "links": null |
| }, |
| "BIBREF6": { |
| "ref_id": "b6", |
| "title": "Improving historical spelling normalization with bidirectional lstms and multi-task learning", |
| "authors": [ |
| { |
| "first": "Marcel", |
| "middle": [], |
| "last": "Bollmann", |
| "suffix": "" |
| }, |
| { |
| "first": "Anders", |
| "middle": [], |
| "last": "S\u00f8gaard", |
| "suffix": "" |
| } |
| ], |
| "year": 2016, |
| "venue": "Proceedings of the 26th International Conference on Computational Linguistics", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Marcel Bollmann and Anders S\u00f8gaard. 2016. Im- proving historical spelling normalization with bi- directional lstms and multi-task learning. In Pro- ceedings of the 26th International Conference on Computational Linguistics (COLING 2016). Osaka, Japan. http://aclweb.org/anthology/C16-1013.", |
| "links": null |
| }, |
| "BIBREF7": { |
| "ref_id": "b7", |
| "title": "Multitask learning: A knowledge-based source of inductive bias", |
| "authors": [ |
| { |
| "first": "Rich", |
| "middle": [], |
| "last": "Caruana", |
| "suffix": "" |
| } |
| ], |
| "year": 1993, |
| "venue": "Proceedings of the 10th International Conference on Machine Learning (ICML)", |
| "volume": "", |
| "issue": "", |
| "pages": "41--48", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Rich Caruana. 1993. Multitask learning: A knowledge-based source of inductive bias. In Pro- ceedings of the 10th International Conference on Machine Learning (ICML). pages 41-48.", |
| "links": null |
| }, |
| "BIBREF8": { |
| "ref_id": "b8", |
| "title": "Multitask learning", |
| "authors": [ |
| { |
| "first": "Rich", |
| "middle": [], |
| "last": "Caruana", |
| "suffix": "" |
| } |
| ], |
| "year": 1998, |
| "venue": "Learning to learn", |
| "volume": "", |
| "issue": "", |
| "pages": "95--133", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Rich Caruana. 1998. Multitask learning. In Learning to learn, Springer, pages 95-133. http://dl.acm.org/citation.cfm?id=296635.296645.", |
| "links": null |
| }, |
| "BIBREF9": { |
| "ref_id": "b9", |
| "title": "On the properties of neural machine translation: Encoder-decoder approaches", |
| "authors": [ |
| { |
| "first": "Kyunghyun", |
| "middle": [], |
| "last": "Cho", |
| "suffix": "" |
| }, |
| { |
| "first": "Bart", |
| "middle": [], |
| "last": "Van Merrienboer", |
| "suffix": "" |
| }, |
| { |
| "first": "Dzmitry", |
| "middle": [], |
| "last": "Bahdanau", |
| "suffix": "" |
| }, |
| { |
| "first": "Yoshua", |
| "middle": [], |
| "last": "Bengio", |
| "suffix": "" |
| } |
| ], |
| "year": 2014, |
| "venue": "Proceedings of the Eighth Workshop on Syntax, Semantics and Structure in Statistical Translation (SSST-8)", |
| "volume": "", |
| "issue": "", |
| "pages": "103--111", |
| "other_ids": { |
| "DOI": [ |
| "10.3115/v1/W14-4012" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Kyunghyun Cho, Bart van Merrienboer, Dzmitry Bah- danau, and Yoshua Bengio. 2014. On the proper- ties of neural machine translation: Encoder-decoder approaches. In Proceedings of the Eighth Work- shop on Syntax, Semantics and Structure in Statis- tical Translation (SSST-8). Doha, Qatar, pages 103- 111. http://dx.doi.org/10.3115/v1/W14-4012.", |
| "links": null |
| }, |
| "BIBREF11": { |
| "ref_id": "b11", |
| "title": "Natural language processing (almost) from scratch", |
| "authors": [ |
| { |
| "first": "Ronan", |
| "middle": [], |
| "last": "Collobert", |
| "suffix": "" |
| }, |
| { |
| "first": "Jason", |
| "middle": [], |
| "last": "Weston", |
| "suffix": "" |
| }, |
| { |
| "first": "L\u00e9on", |
| "middle": [], |
| "last": "Bottou", |
| "suffix": "" |
| }, |
| { |
| "first": "Michael", |
| "middle": [], |
| "last": "Karlen", |
| "suffix": "" |
| }, |
| { |
| "first": "Koray", |
| "middle": [], |
| "last": "Kavukcuoglu", |
| "suffix": "" |
| }, |
| { |
| "first": "Pavel", |
| "middle": [], |
| "last": "Kuksa", |
| "suffix": "" |
| } |
| ], |
| "year": 2011, |
| "venue": "The Journal of Machine Learning Research", |
| "volume": "12", |
| "issue": "", |
| "pages": "2493--2537", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Ronan Collobert, Jason Weston, L\u00e9on Bottou, Michael Karlen, Koray Kavukcuoglu, and Pavel Kuksa. 2011. Natural language pro- cessing (almost) from scratch. The Journal of Machine Learning Research 12:2493-2537.", |
| "links": null |
| }, |
| "BIBREF12": { |
| "ref_id": "b12", |
| "title": "The Anselm corpus: Methods and perspectives of a parallel aligned corpus", |
| "authors": [ |
| { |
| "first": "Stefanie", |
| "middle": [], |
| "last": "Dipper", |
| "suffix": "" |
| }, |
| { |
| "first": "Simone", |
| "middle": [], |
| "last": "Schultz-Balluff", |
| "suffix": "" |
| } |
| ], |
| "year": 2013, |
| "venue": "Proceedings of the NODALIDA Workshop on Computational Historical Linguistics", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Stefanie Dipper and Simone Schultz-Balluff. 2013. The Anselm corpus: Methods and perspectives of a parallel aligned corpus. In Proceedings of the NODALIDA Work- shop on Computational Historical Linguistics. http://www.ep.liu.se/ecp/087/003/ecp1387003.pdf.", |
| "links": null |
| }, |
| "BIBREF13": { |
| "ref_id": "b13", |
| "title": "Multi-task learning for multiple language translation", |
| "authors": [ |
| { |
| "first": "Daxiang", |
| "middle": [], |
| "last": "Dong", |
| "suffix": "" |
| }, |
| { |
| "first": "Hua", |
| "middle": [], |
| "last": "Wu", |
| "suffix": "" |
| }, |
| { |
| "first": "Wei", |
| "middle": [], |
| "last": "He", |
| "suffix": "" |
| }, |
| { |
| "first": "Dianhai", |
| "middle": [], |
| "last": "Yu", |
| "suffix": "" |
| }, |
| { |
| "first": "Haifeng", |
| "middle": [], |
| "last": "Wang", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing", |
| "volume": "1", |
| "issue": "", |
| "pages": "1723--1732", |
| "other_ids": { |
| "DOI": [ |
| "10.3115/v1/P15-1166" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Daxiang Dong, Hua Wu, Wei He, Dianhai Yu, and Haifeng Wang. 2015. Multi-task learning for multiple language translation. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th Interna- tional Joint Conference on Natural Language Pro- cessing (Volume 1: Long Papers). Association for Computational Linguistics, pages 1723-1732. https://doi.org/10.3115/v1/P15-1166.", |
| "links": null |
| }, |
| "BIBREF14": { |
| "ref_id": "b14", |
| "title": "Long short-term memory", |
| "authors": [ |
| { |
| "first": "Sepp", |
| "middle": [], |
| "last": "Hochreiter", |
| "suffix": "" |
| }, |
| { |
| "first": "J\u00fcrgen", |
| "middle": [], |
| "last": "Schmidhuber", |
| "suffix": "" |
| } |
| ], |
| "year": 1997, |
| "venue": "Neural Computation", |
| "volume": "9", |
| "issue": "8", |
| "pages": "1735--1780", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Sepp Hochreiter and J\u00fcrgen Schmidhu- ber. 1997. Long short-term memory. Neural Computation 9(8):1735-1780.", |
| "links": null |
| }, |
| "BIBREF16": { |
| "ref_id": "b16", |
| "title": "Adam: A method for stochastic optimization", |
| "authors": [ |
| { |
| "first": "P", |
| "middle": [], |
| "last": "Diederik", |
| "suffix": "" |
| }, |
| { |
| "first": "Jimmy", |
| "middle": [ |
| "Lei" |
| ], |
| "last": "Kingma", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Ba", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "The International Conference on Learning Representations (ICLR)", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Diederik P. Kingma and Jimmy Lei Ba. 2015. Adam: A method for stochastic optimiza- tion. The International Conference on Learn- ing Representations (ICLR) ArXiv:1412.6980. http://arxiv.org/abs/1412.6980.", |
| "links": null |
| }, |
| "BIBREF17": { |
| "ref_id": "b17", |
| "title": "Improving sentence compression by learning to predict gaze", |
| "authors": [ |
| { |
| "first": "Sigrid", |
| "middle": [], |
| "last": "Klerke", |
| "suffix": "" |
| }, |
| { |
| "first": "Yoav", |
| "middle": [], |
| "last": "Goldberg", |
| "suffix": "" |
| }, |
| { |
| "first": "Anders", |
| "middle": [], |
| "last": "S\u00f8gaard", |
| "suffix": "" |
| } |
| ], |
| "year": 2016, |
| "venue": "Proceedings of NAACL-HLT 2016", |
| "volume": "", |
| "issue": "", |
| "pages": "1528--1533", |
| "other_ids": { |
| "DOI": [ |
| "10.18653/v1/N16-1179" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Sigrid Klerke, Yoav Goldberg, and Anders S\u00f8gaard. 2016. Improving sentence compression by learn- ing to predict gaze. In Proceedings of NAACL- HLT 2016. San Diego, CA, pages 1528-1533. http://dx.doi.org/10.18653/v1/N16-1179.", |
| "links": null |
| }, |
| "BIBREF18": { |
| "ref_id": "b18", |
| "title": "Guidelines for normalizing historical German texts", |
| "authors": [ |
| { |
| "first": "Julia", |
| "middle": [], |
| "last": "Krasselt", |
| "suffix": "" |
| }, |
| { |
| "first": "Marcel", |
| "middle": [], |
| "last": "Bollmann", |
| "suffix": "" |
| }, |
| { |
| "first": "Stefanie", |
| "middle": [], |
| "last": "Dipper", |
| "suffix": "" |
| }, |
| { |
| "first": "Florian", |
| "middle": [], |
| "last": "Petran", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "3--419680", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Julia Krasselt, Marcel Bollmann, Stefanie Dipper, and Florian Petran. 2015. Guidelines for nor- malizing historical German texts. Bochumer Linguistische Arbeitsberichte 15. http://nbn- resolving.de/urn/resolver.pl?urn:nbn:de:hebis:30:3- 419680.", |
| "links": null |
| }, |
| "BIBREF19": { |
| "ref_id": "b19", |
| "title": "Visualizing and understanding neural models in NLP", |
| "authors": [ |
| { |
| "first": "Jiwei", |
| "middle": [], |
| "last": "Li", |
| "suffix": "" |
| }, |
| { |
| "first": "Xinlei", |
| "middle": [], |
| "last": "Chen", |
| "suffix": "" |
| }, |
| { |
| "first": "Eduard", |
| "middle": [], |
| "last": "Hovy", |
| "suffix": "" |
| }, |
| { |
| "first": "Dan", |
| "middle": [], |
| "last": "Jurafsky", |
| "suffix": "" |
| } |
| ], |
| "year": 2016, |
| "venue": "Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. Association for Computational Linguistics", |
| "volume": "", |
| "issue": "", |
| "pages": "681--691", |
| "other_ids": { |
| "DOI": [ |
| "10.18653/v1/N16-1082" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Jiwei Li, Xinlei Chen, Eduard Hovy, and Dan Ju- rafsky. 2016. Visualizing and understanding neu- ral models in NLP. In Proceedings of the 2016 Conference of the North American Chap- ter of the Association for Computational Linguis- tics: Human Language Technologies. Associa- tion for Computational Linguistics, pages 681-691. https://doi.org/10.18653/v1/N16-1082.", |
| "links": null |
| }, |
| "BIBREF20": { |
| "ref_id": "b20", |
| "title": "Normalising Slovene data: historical texts vs. user-generated content", |
| "authors": [ |
| { |
| "first": "Nikola", |
| "middle": [], |
| "last": "Ljube\u0161i\u0107", |
| "suffix": "" |
| }, |
| { |
| "first": "Katja", |
| "middle": [], |
| "last": "Zupan", |
| "suffix": "" |
| }, |
| { |
| "first": "Darja", |
| "middle": [], |
| "last": "Fi\u0161er", |
| "suffix": "" |
| }, |
| { |
| "first": "Toma\u017e", |
| "middle": [], |
| "last": "Erjavec", |
| "suffix": "" |
| } |
| ], |
| "year": 2016, |
| "venue": "Proceedings of the 13th Conference on Natural Language Processing (KONVENS)", |
| "volume": "", |
| "issue": "", |
| "pages": "146--155", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Nikola Ljube\u0161i\u0107, Katja Zupan, Darja Fi\u0161er, and Toma\u017e Erjavec. 2016. Normalising Slovene data: histor- ical texts vs. user-generated content. In Proceed- ings of the 13th Conference on Natural Language Processing (KONVENS). Bochum, Germany, pages 146-155.", |
| "links": null |
| }, |
| "BIBREF21": { |
| "ref_id": "b21", |
| "title": "Multi-task sequence to sequence learning", |
| "authors": [ |
| { |
| "first": "Minh-Thang", |
| "middle": [], |
| "last": "Luong", |
| "suffix": "" |
| }, |
| { |
| "first": "Quoc", |
| "middle": [ |
| "V" |
| ], |
| "last": "Le", |
| "suffix": "" |
| }, |
| { |
| "first": "Ilya", |
| "middle": [], |
| "last": "Sutskever", |
| "suffix": "" |
| }, |
| { |
| "first": "Oriol", |
| "middle": [], |
| "last": "Vinyals", |
| "suffix": "" |
| }, |
| { |
| "first": "Lukasz", |
| "middle": [], |
| "last": "Kaiser", |
| "suffix": "" |
| } |
| ], |
| "year": 2016, |
| "venue": "4th International Conference on Learning Representations", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Minh-Thang Luong, Quoc V. Le, Ilya Sutskever, Oriol Vinyals, and Lukasz Kaiser. 2016. Multi-task sequence to sequence learning. In 4th International Conference on Learning Representations (ICLR 2016). https://arxiv.org/abs/1511.06114v4.", |
| "links": null |
| }, |
| "BIBREF22": { |
| "ref_id": "b22", |
| "title": "Visualizing data using t-SNE", |
| "authors": [ |
| { |
| "first": "Laurens", |
| "middle": [], |
| "last": "Van Der Maaten", |
| "suffix": "" |
| }, |
| { |
| "first": "Geoffrey", |
| "middle": [], |
| "last": "Hinton", |
| "suffix": "" |
| } |
| ], |
| "year": 2008, |
| "venue": "Journal of Machine Learning Research", |
| "volume": "9", |
| "issue": "", |
| "pages": "2579--2605", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Laurens van der Maaten and Geoffrey Hinton. 2008. Visualizing data using t-SNE. Journal of Machine Learning Research 9:2579-2605.", |
| "links": null |
| }, |
| "BIBREF23": { |
| "ref_id": "b23", |
| "title": "An SMT approach to automatic annotation of historical text", |
| "authors": [ |
| { |
| "first": "Eva", |
| "middle": [], |
| "last": "Pettersson", |
| "suffix": "" |
| }, |
| { |
| "first": "Be\u00e1ta", |
| "middle": [], |
| "last": "Megyesi", |
| "suffix": "" |
| }, |
| { |
| "first": "J\u00f6rg", |
| "middle": [], |
| "last": "Tiedemann", |
| "suffix": "" |
| } |
| ], |
| "year": 2013, |
| "venue": "Proceedings of the NODALIDA Workshop on Computational Historical Linguistics", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Eva Pettersson, Be\u00e1ta Megyesi, and J\u00f6rg Tiedemann. 2013. An SMT approach to automatic annotation of historical text. In Proceedings of the NODALIDA Workshop on Computational Historical Linguistics. Oslo, Norway. http://www.ep.liu.se/ecp/087/005/ecp1387005.pdf.", |
| "links": null |
| }, |
| "BIBREF24": { |
| "ref_id": "b24", |
| "title": "Natural Language Processing for Historical Texts", |
| "authors": [ |
| { |
| "first": "Michael", |
| "middle": [], |
| "last": "Piotrowski", |
| "suffix": "" |
| } |
| ], |
| "year": 2012, |
| "venue": "Number 17 in Synthesis Lectures on Human Language Technologies", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": { |
| "DOI": [ |
| "10.2200/s00436ed1v01y201207hlt017" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Michael Piotrowski. 2012. Natural Language Processing for Historical Texts. Number 17 in Synthesis Lectures on Human Language Technologies. Morgan & Claypool, San Rafael, CA. http://dx.doi.org/10.2200/s00436ed1v01y201207hlt017.", |
| "links": null |
| }, |
| "BIBREF25": { |
| "ref_id": "b25", |
| "title": "Edit transducers for spelling variation in Old Spanish", |
| "authors": [ |
| { |
| "first": "Jordi", |
| "middle": [], |
| "last": "Porta", |
| "suffix": "" |
| }, |
| { |
| "first": "Jos\u00e9-Luis", |
| "middle": [], |
| "last": "Sancho", |
| "suffix": "" |
| }, |
| { |
| "first": "Javier", |
| "middle": [], |
| "last": "G\u00f3mez", |
| "suffix": "" |
| } |
| ], |
| "year": 2013, |
| "venue": "Proceedings of the NODALIDA Workshop on Computational Historical Linguistics", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Jordi Porta, Jos\u00e9-Luis Sancho, and Javier G\u00f3mez. 2013. Edit transducers for spelling variation in Old Spanish. In Proceedings of the NODALIDA Workshop on Computational Historical Linguistics. Oslo, Norway. http://www.ep.liu.se/ecp/087/006/ecp1387006.pdf.", |
| "links": null |
| }, |
| "BIBREF26": { |
| "ref_id": "b26", |
| "title": "Modernizing historical Slovene words with character-based SMT", |
| "authors": [ |
| { |
| "first": "Yves", |
| "middle": [], |
| "last": "Scherrer", |
| "suffix": "" |
| }, |
| { |
| "first": "Toma\u017e", |
| "middle": [], |
| "last": "Erjavec", |
| "suffix": "" |
| } |
| ], |
| "year": 2013, |
| "venue": "Proceedings of the 4th Biennial Workshop on Balto-Slavic Natural Language Processing", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Yves Scherrer and Toma\u017e Erjavec. 2013. Modernizing historical Slovene words with character-based SMT. In Proceedings of the 4th Biennial Workshop on Balto-Slavic Natural Language Processing.", |
| "links": null |
| }, |
| "BIBREF28": { |
| "ref_id": "b28", |
| "title": "Automatic normalisation of the Swiss German Archi-Mob corpus using character-level machine translation", |
| "authors": [ |
| { |
| "first": "Yves", |
| "middle": [], |
| "last": "Scherrer", |
| "suffix": "" |
| }, |
| { |
| "first": "Nikola", |
| "middle": [], |
| "last": "Ljube\u0161i\u0107", |
| "suffix": "" |
| } |
| ], |
| "year": 2016, |
| "venue": "Proceedings of the 13th Conference on Natural Language Processing", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Yves Scherrer and Nikola Ljube\u0161i\u0107. 2016. Automatic normalisation of the Swiss German Archi-Mob corpus using character-level machine translation. In Proceedings of the 13th Conference on Natural Language Processing (KONVENS).", |
| "links": null |
| }, |
| "BIBREF30": { |
| "ref_id": "b30", |
| "title": "Sequence to sequence learning with neural networks", |
| "authors": [ |
| { |
| "first": "Ilya", |
| "middle": [], |
| "last": "Sutskever", |
| "suffix": "" |
| }, |
| { |
| "first": "Oriol", |
| "middle": [], |
| "last": "Vinyals", |
| "suffix": "" |
| }, |
| { |
| "first": "Quoc", |
| "middle": [ |
| "V" |
| ], |
| "last": "Le", |
| "suffix": "" |
| } |
| ], |
| "year": 2014, |
| "venue": "Advances in Neural Information Processing Systems (NIPS 2014)", |
| "volume": "27", |
| "issue": "", |
| "pages": "3104--3112", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Ilya Sutskever, Oriol Vinyals, and Quoc V. Le. 2014. Sequence to sequence learning with neural networks. In Advances in Neural Information Processing Systems 27 (NIPS 2014), pages 3104-3112.", |
| "links": null |
| }, |
| "BIBREF31": { |
| "ref_id": "b31", |
| "title": "An open diachronic corpus of historical Spanish: annotation criteria and automatic modernisation of spelling", |
| "authors": [ |
| { |
| "first": "Felipe", |
| "middle": [], |
| "last": "S\u00e1nchez-Mart\u00ednez", |
| "suffix": "" |
| }, |
| { |
| "first": "Isabel", |
| "middle": [], |
| "last": "Mart\u00ednez-Sempere", |
| "suffix": "" |
| }, |
| { |
| "first": "Xavier", |
| "middle": [], |
| "last": "Ivars-Ribes", |
| "suffix": "" |
| }, |
| { |
| "first": "Rafael", |
| "middle": [ |
| "C" |
| ], |
| "last": "Carrasco", |
| "suffix": "" |
| } |
| ], |
| "year": 2013, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Felipe S\u00e1nchez-Mart\u00ednez, Isabel Mart\u00ednez-Sempere, Xavier Ivars-Ribes, and Rafael C. Carrasco. 2013. An open diachronic corpus of historical Spanish: annotation criteria and automatic modernisation of spelling. http://arxiv.org/abs/1306.3692v1.", |
| "links": null |
| }, |
| "BIBREF32": { |
| "ref_id": "b32", |
| "title": "Evaluating the pairwise string alignment of pronunciations", |
| "authors": [ |
| { |
| "first": "Martijn", |
| "middle": [], |
| "last": "Wieling", |
| "suffix": "" |
| }, |
| { |
| "first": "Jelena", |
| "middle": [], |
| "last": "Proki\u0107", |
| "suffix": "" |
| }, |
| { |
| "first": "John", |
| "middle": [], |
| "last": "Nerbonne", |
| "suffix": "" |
| } |
| ], |
| "year": 2009, |
| "venue": "Proceedings of the EACL 2009 Workshop on Language Technology and Resources for Cultural Heritage, Social Sciences, Humanities, and Education", |
| "volume": "", |
| "issue": "", |
| "pages": "26--34", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Martijn Wieling, Jelena Proki\u0107, and John Nerbonne. 2009. Evaluating the pairwise string alignment of pronunciations. In Proceedings of the EACL 2009 Workshop on Language Technology and Resources for Cultural Heritage, Social Sciences, Humanities, and Education (LaTeCH - SHELT&R 2009). Athens, Greece, pages 26-34.", |
| "links": null |
| }, |
| "BIBREF33": { |
| "ref_id": "b33", |
| "title": "Show, attend and tell: Neural image caption generation with visual attention", |
| "authors": [ |
| { |
| "first": "Kelvin", |
| "middle": [], |
| "last": "Xu", |
| "suffix": "" |
| }, |
| { |
| "first": "Jimmy", |
| "middle": [], |
| "last": "Ba", |
| "suffix": "" |
| }, |
| { |
| "first": "Ryan", |
| "middle": [], |
| "last": "Kiros", |
| "suffix": "" |
| }, |
| { |
| "first": "Kyunghyun", |
| "middle": [], |
| "last": "Cho", |
| "suffix": "" |
| }, |
| { |
| "first": "Aaron", |
| "middle": [], |
| "last": "Courville", |
| "suffix": "" |
| }, |
| { |
| "first": "Ruslan", |
| "middle": [], |
| "last": "Salakhudinov", |
| "suffix": "" |
| }, |
| { |
| "first": "Rich", |
| "middle": [], |
| "last": "Zemel", |
| "suffix": "" |
| }, |
| { |
| "first": "Yoshua", |
| "middle": [], |
| "last": "Bengio", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "JMLR Workshop and Conference Proceedings: Proceedings of the 32nd International Conference on Machine Learning", |
| "volume": "37", |
| "issue": "", |
| "pages": "2048--2057", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Kelvin Xu, Jimmy Ba, Ryan Kiros, Kyunghyun Cho, Aaron Courville, Ruslan Salakhudinov, Rich Zemel, and Yoshua Bengio. 2015. Show, attend and tell: Neural image caption generation with visual attention. In JMLR Workshop and Conference Proceedings: Proceedings of the 32nd International Conference on Machine Learning. Lille, France, volume 37, pages 2048-2057. http://proceedings.mlr.press/v37/xuc15.pdf.", |
| "links": null |
| } |
| }, |
| "ref_entries": { |
| "FIGREF2": { |
| "uris": null, |
| "text": "t-SNE projections (with perplexity 7) of character embeddings from models trained on M4; (a) Base model, (b) Multi-task learning model. Figure 3: t-SNE projections (with perplexity 5) of the intermediate vectors produced by the encoder (\"historical word embeddings\"), from models trained on M4. Heat map of parameter differences in the final dense layer between (a) the plain and the attention model as well as (b) the plain and the multi-task model, when trained on the N4 manuscript. The changes correlate by \u03c1 = 0.959.", |
| "num": null, |
| "type_str": "figure" |
| }, |
| "FIGREF3": { |
| "uris": null, |
| "text": "First-derivative saliency w.r.t. the input sequence, as calculated from the base model (left), the attentional model (center), and the MTL model (right). The scores for the attentional and the multi-task model correlate by \u03c1 = 0.615, while the correlation of either one with the base model is |\u03c1| < 0.12.", |
| "num": null, |
| "type_str": "figure" |
| }, |
| "TABREF2": { |
| "content": "<table/>", |
| "type_str": "table", |
| "html": null, |
| "text": "Selected predictions from some of our models on the M4 text; B = BEAM, F = FILTER, A = ATTENTION", |
| "num": null |
| } |
| } |
| } |
| } |