
Method

The goal of a language model is to assign meaningful probabilities to a sequence of words. Given a sequence of tokens $\mathbf{X}=(x_1,\ldots,x_T)$, where $T$ is the length of the sequence, our task is to estimate the joint probability $P(\mathbf{X})$, which factorizes as $$\begin{equation} \label{cond} P(\mathbf{X})=\prod_{i=1}^{T} p\left(x_{i} | x_{1}, \ldots, x_{i-1}\right) , \end{equation}$$ where $(x_{1}, \ldots, x_{i-1})$ is the context. An intrinsic evaluation measure for language models is perplexity (PPL), defined as the inverse probability of the token sequence normalized by taking the $T^{th}$ root, where $T$ is the number of tokens: $$\begin{equation} \label{ppl} PPL(\mathbf{X})= P(\mathbf{X})^{-1/T}. \end{equation}$$ In our two approaches we use transformer-based architectures: BERT and Transformer-XL, as mentioned before. Calculating the auto-regressive $P(\mathbf{X})$ for Transformer-XL is straightforward, as the model is unidirectional, but the probability does not factorize the same way for a bi-directional model like BERT.
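The relationship between the factorized joint probability and perplexity can be sketched in a few lines. This is a minimal illustration, not the paper's code; it assumes we already have the per-token conditional log-probabilities from some model.

```python
import math

def perplexity(token_log_probs):
    """Compute PPL from per-token log probabilities log p(x_i | x_<i).

    Since P(X) is the product of the conditionals,
    PPL(X) = P(X)^(-1/T) = exp(-(1/T) * sum_i log p(x_i | x_<i)).
    """
    T = len(token_log_probs)
    return math.exp(-sum(token_log_probs) / T)

# Toy example: three tokens, each assigned conditional probability 0.25,
# gives a perplexity of 4 (the model is as uncertain as a uniform
# 4-way choice at every step).
logps = [math.log(0.25)] * 3
print(perplexity(logps))
```

Working in log space, as above, avoids numerical underflow when the sequence is long.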

BERT's bi-directional context makes it non-trivial to calculate an auto-regressive joint probability. A simple fix would be to mask all the tokens $\mathbf{x}_{>i}$ and calculate the conditional factors as for a unidirectional model. By doing so, however, we lose the advantage of the bi-directional context that BERT enables. We instead propose an approximation of the joint probability as

$$\begin{equation} \label{approx} P(\mathbf{X}) \approx \prod_{i=1}^{T} p\left(x_{i} | x_{1}, \ldots, x_{i-1}, x_{i+1}, \ldots, x_{T}\right). \end{equation}$$ This type of approximation has been previously explored with bi-directional RNN LMs [@inproceedings], but not for deep transformer models. We therefore define a pseudo-perplexity score from the above approximated joint probability.

The original BERT has two training objectives. The first is masked language modelling, in which input tokens are masked randomly and then predicted using the left and right context. The second, next sentence prediction, jointly trains text-pair representations. For training the masked language model, the original BERT used Byte Pair Encoding (BPE) [@10.5555/177910.177914] for subword tokenization [@DBLP:journals/corr/SennrichHB15], which allows, for example, the rare word "unaffable" to be split into more frequent subwords such as ["un", "aff", "able"]. To remain consistent with experiments performed with LSTMs, we use Morfessor for subword tokenization of Finnish. In addition, we apply boundary markers as in (Table 1{reference-type="ref" reference="Tab:markings"}) and train two separate models using this distinction. We train with the left-marked scheme, as the original BERT was trained with such a scheme, and with the left+right-marked scheme, as it was the previous SOTA for Finnish. For the Transformer-XL experiments, we train only with the left+right-marked scheme.

::: {#Tab:markings}

| subword marking          | Example            |
|--------------------------|--------------------|
| left+right-marked (+m+)  | two slipp+ +er+ +s |
| left-marked (+m)         | two slipp +er +s   |

: Two methods of marking subword units such that the original sentence 'two slippers' is reconstructed
:::

[]{#Tab:markings label="Tab:markings"}
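A small sketch makes the two marking schemes concrete: a leading '+' glues a unit onto the previous one, and the trailing '+' used by the left+right-marked scheme carries the same information redundantly from the other side, so a single reconstruction routine handles both. This is an illustrative helper, not code from the paper.

```python
def reconstruct(subwords):
    """Rejoin boundary-marked subword units into the original sentence.

    A unit with a leading '+' continues the previous word (both the
    left-marked and left+right-marked schemes use this); any trailing
    '+' (left+right-marked scheme only) is stripped.
    """
    words = []
    for unit in subwords:
        continues = unit.startswith("+")
        piece = unit.strip("+")
        if continues and words:
            words[-1] += piece
        else:
            words.append(piece)
    return " ".join(words)

# Both markings of 'two slippers' from Table 1 round-trip correctly.
print(reconstruct(["two", "slipp", "+er", "+s"]))    # left-marked (+m)
print(reconstruct(["two", "slipp+", "+er+", "+s"]))  # left+right-marked (+m+)
```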

Next Sentence Prediction (NSP) is a binary classification task that predicts whether two segments follow each other in the original text. This pre-training task was proposed to further improve performance on downstream tasks, like Natural Language Inference (NLI), but in practice removing the NSP loss matches or slightly improves downstream task performance [@DBLP:journals/corr/abs-1907-11692]. In this paper, we omit the NSP task from the BERT pre-training procedure and change the input from a SEGMENT-PAIR input to a SINGLE-SEGMENT input, as seen in (Fig 1{reference-type="ref" reference="fig:BERT_label"}).

[Figure 1: BERT input for the original sentence 'how are you doing today']

Transformer-XL introduced the notion of recurrence in self-attention by caching the hidden state sequence of previous segments to compute the hidden states of a new segment. It also introduces a novel relative positional embedding scheme; combined, the two address the issue of fixed context lengths. As mentioned, Transformer-XL is a unidirectional deep transformer architecture, so its perplexity can be calculated directly with (Eq [ppl]{reference-type="ref" reference="ppl"}). The only change is in the input format, where we use subword units rather than whole-word units, as Finnish is morphologically richer than English.
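The recurrence mechanism can be sketched as a single simplified attention step. This is a bare illustration of the caching idea only, under stated simplifications: one head, NumPy, no relative positional embeddings (which Transformer-XL does use), and invented weight names `W_q`, `W_k`, `W_v`; it is not the paper's model.

```python
import numpy as np

def segment_forward(h_seg, memory, W_q, W_k, W_v):
    """One Transformer-XL-style recurrence step (single head, no
    relative positions, illustrative only). Queries come from the
    current segment; keys and values attend over the cached memory
    concatenated with the current segment, extending the context
    beyond the segment boundary.
    """
    context = h_seg if memory is None else np.concatenate([memory, h_seg], axis=0)
    q, k, v = h_seg @ W_q, context @ W_k, context @ W_v
    scores = q @ k.T / np.sqrt(k.shape[-1])
    attn = np.exp(scores - scores.max(axis=-1, keepdims=True))
    attn /= attn.sum(axis=-1, keepdims=True)
    out = attn @ v
    # Cache this segment's hidden states (held fixed, no gradient flow)
    # to serve as memory for the next segment.
    new_memory = h_seg
    return out, new_memory
```

Processing a long token stream segment by segment with this cache lets each position condition on tokens from earlier segments at no extra training cost.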

The Finnish text data used for the language modeling task is provided by [@ftc-korp_en]. The dataset consists mainly of newspapers and books, comprising around 144 million word tokens and 4.2 million unique tokens. We use Morfessor 2.0 [@smit2014morfessor] with the basic unsupervised Morfessor Baseline algorithm [@10.1145/1187415.1187418] and a corpus weight parameter ($\alpha$) of 0.001. We obtain a vocabulary of 34K subword tokens for the left+right-marked (+m+) scheme and 19K subword tokens for the left-marked (+m) scheme. We also pre-process the data to remove punctuation marks so that the same data can be used with an ASR system. The input is one sentence per line, and we shuffle the sentences at each epoch. The data is randomly divided into a training dataset and a validation dataset. The test dataset consists of 2850 Finnish news articles obtained from the Finnish national broadcaster YLE.
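The pre-processing pipeline described above can be sketched as follows. This is an illustrative sketch, not the paper's scripts; the validation fraction and the random seed are assumptions chosen for the example, not values from the paper.

```python
import random
import string

def preprocess(lines, valid_fraction=0.1, seed=0):
    """Strip punctuation from one-sentence-per-line text (so the same
    data can also feed an ASR system), then randomly split sentences
    into training and validation sets. valid_fraction and seed are
    illustrative defaults, not values from the paper.
    """
    table = str.maketrans("", "", string.punctuation)
    cleaned = [line.translate(table).strip() for line in lines]
    rng = random.Random(seed)
    rng.shuffle(cleaned)
    n_valid = max(1, int(len(cleaned) * valid_fraction))
    return cleaned[n_valid:], cleaned[:n_valid]

train, valid = preprocess(["Kaksi tohvelia!", "Hyvää päivää, Suomi."])
```

Per-epoch shuffling of the training sentences would then be done inside the training loop, reshuffling `train` before each pass.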