\section{Introduction}\label{sec:intro}
\subsection{Background}
Forecasting outstanding claims (``OSC'') is essential in insurance for effective loss reserving, liability reporting and pricing. In addition to producing accurate central estimates, forecasting the \emph{distribution} of future claims accurately is important in allocating capital which satisfies regulatory requirements. In Australia, the calculation of regulated risk margins involves determining quantiles such as the 75th percentile. In Europe, Solvency II regulations require calculations as extreme as the 99.5th percentile. Having an enhanced understanding of the distribution of OSC is beneficial beyond regulatory obligations, of course. With \emph{point forecasts} of OSC being the predominant focus of the literature, this paper focuses on \emph{distributional} forecasting of outstanding claims using neural networks.
Machine learning models, especially neural networks (``NN''), have gained momentum in the actuarial field in the past few years. Neural networks have stood out for providing `state-of-the-art results' \citep*{RoRi2019}. Their earliest use in reserving, to our knowledge, dates back to \citet*{Mu2006}. Since 2018, this field has accelerated, with recent NN applications to reserving showcasing their accuracy and versatility. \citet*{Ku2019, GaRiWu2020} use NNs to learn claim development trends from multiple lines of business simultaneously. \citet*{WuMe2019} develop a GLM-NN hybrid model, which is used successfully by \citet*{GaRiWu2020,Po2019,Ga2020}. The NN's ability to handle large granular datasets has allowed individual claims modelling to flourish, with recent success demonstrated by \citet*{Ku2020,DeLiWu2020,Ga2020}.
\subsection{Motivation and contributions}
Despite the potential shown by NNs, several gaps in their current implementation can be observed, in particular with aggregate data. Firstly, most of the neural network loss reserving literature focuses on obtaining accurate central estimates. However, providing accurate \emph{distributional} forecasts for outstanding claims is essential for optimising capital allocation, reporting liabilities with more accurate risk margins, and allowing profit margins that suit the company's risk appetite to be set more accurately. Secondly, there is a lack of an \emph{explicit model selection framework}. The performance of neural networks is heavily dependent on their design; having no explicit selection procedure therefore risks choosing a model with significantly reduced accuracy. Furthermore, model building becomes more reliant on expert input, hindering their \emph{applicability in practice}. This paper aims to address those issues, as developed below.
\subsubsection{Distributional forecasting with Neural Networks using the MDN}
Probabilistic forecasts of outstanding claims are essential for capital allocation and optimising the risk margins set when reporting liabilities and pricing products. In this paper, we explain how such forecasts can be obtained with the Mixture Density Network (``MDN'') in a flexible way, working with aggregate loss triangles. The MDN is a design developed by \citet*{Bi1994} \citep*[see also][]{Bi06}, with applications found in many fields, such as financial modelling \citep*{OrNe1996}, acoustics \citep*{ZeSe2014} and electrical engineering \citep*{VoFeMo2018}. The essential idea is that the parameters of the distribution are estimated by the NN architecture, allowing for heterogeneity. Related works in the (non-mixture) Gaussian setting can be found in \citet{NiWe94} and \citet{LaPrBl17}. The MDN assumes that the target output follows a mixture (typically Gaussian) distribution, effectively allowing a flexible distribution to be fitted to the data. In this paper, we will assume that incremental claims follow a mixture Gaussian or mixture Log-Gaussian distribution.
A special case of an MDN was used by \citet{Ku2020}, who mixed a shifted lognormal with a degenerate distribution to represent positive and zero cashflows for individual claims, respectively. In contrast, in our application we utilise a mixture of Gaussian distributions, whereby the number of components as well as all parameters are kept flexible.
In loss reserving, previous literature has fitted members of the exponential family using NNs. For instance, \citet*{GaRiWu2020} considered aggregate claims, and \citet{DeLiWu2020,Ga2020} considered individual claims. However, they typically focused on the central estimate rather than distributional properties.
A typical approach is to utilise the NN to replace the linear component in a GLM setup \citep*[see also][for additional discussions]{DeTr2019}, including the GLM-based assumption of a homogeneous mean-variance relationship. Such an assumption is not as flexible for the modelling of volatility, as compared to the mixture Gaussian fit by the MDN, which allows for heterogeneity for \emph{each} random variable. In addition, the mixture Gaussian, given sufficient components, can approximate any distribution within a desired accuracy \citep[see][for details]{NgMc2019}. As the MDN considers a wide range of distributions of varying volatility and shape, the training and fitting process of the network is akin to distribution selection. Another benefit of MDNs is that the central estimates can be derived directly from the fitted mixture Gaussian parameters, which means that location and shape are fitted simultaneously under the single model.
In addition to analysing central estimates, we use qualitative and quantitative measures to assess the distributional and quantile accuracy of the fitted MDN forecasts; see Section \ref{sec:metrics}. Overall, we show that the MDN's flexibility yields more accurate probabilistic forecasts when compared to the cross-classified over-dispersed Poisson (``ccODP''), on both simulated and real data; see also Section \ref{S_motdata}.
\subsubsection{Model calibration and selection}
Neural networks are highly flexible in their design. Since different designs produce varying results, it is vital to have a clear methodology for testing between these different model designs.
Commonly, the data is partitioned into training, validation and testing sets.
The network is trained on the training set until the validation loss is minimised, then projected onto the testing set. The model producing the lowest test error is preferred.
To test between different models, a loss triangle must be split into training, validation and testing sets. \citet*{Ta2000} and \citet*{BeBe2012} explore two popular methods for partitioning sequential data: fixed origin and rolling origin. \citet*{BaRi2020} recently applied the rolling origin method to loss triangles, in order to compare between different traditional reserving models.
This paper contributes by performing the rolling origin data partition exclusively within the aggregate loss triangle, using it to select neural network designs. Taking the latest calendar periods for testing has been done by \citet*{Ku2019,RaAlNu2019}, which was facilitated by simultaneously training the NN on multiple triangles. This paper performs model testing, selection and fitting using only one triangle at a time. Furthermore, the neural network model testing framework implemented in this paper incorporates a validation set, which is essential in training NNs.
Furthermore, this paper contributes by implementing a model searching and selection algorithm to loss triangle reserving, which methodically searches and selects different network features such as the number of layers, nodes, and components. This algorithm extends the methodology implemented by \citet*{GaRiWu2020} by also searching for the best-performing regularisation coefficients. This algorithm also makes NN design selection more methodical, requiring less expert input and assisting the popularisation of NNs in practice.
\subsubsection{Acceptability of results in practice: ResMDN, and projection constraints}
While the current neural network applications to loss reserving have shown their accuracy and flexibility in modelling, there are practical and technical obstacles which have hindered the acceptance and use of neural networks by the actuarial community \citep*{RoRi2019, WuMe2019}.
Producing interpretable forecasts is important for justifying business decisions to stakeholders. The neural network's lack of interpretability has contributed to its lack of acceptance by actuaries. \citet*{WuMe2019} tackle this by adopting the ResNet design from \citet{HeZhReSu16} to form the GLM-NN hybrid CANN (Combined Actuarial Neural Network) architecture.
In this paper, we adapt the MDN to the above approach, to create the ResMDN. While \citet*{GaRiWu2020}, \citet{Po2019} and \citet{RiWu2021} have adopted a similar methodology in loss reserving, the ResMDN provides the additional distributional flexibility of a (heterogeneous) mixture Gaussian distribution assumption. It boosts all parameters of the mixture Gaussian simultaneously and directly within one network, while maintaining a fixed GLM initialisation for interpretability.
We show that the ResMDN can successfully boost the embedded GLM, the ccODP, improving structural deficiencies in both mean and volatility estimates where visible, while maintaining the interpretable GLM backbone in its forecasts.
As a further consideration, the NN's black box modelling can cause long range forecasts to become unstable \citep{RoRi2019}. In other words, it may not be adequate to project the behaviour of the highly flexible function fit to the upper triangle to the lower triangle. In this paper, central estimate projections can be explicitly constrained by the actuary, forcing the neural network to only fit functions which produce reasonable mean forecasts. This framework thus also allows actuarial judgement to be explicitly incorporated into the modelling process.
\subsubsection{Aggregate data in loss triangles} \label{S_motdata}
Recent applications of machine learning to loss reserving have been mainly focused on more granular claims datasets, such as individual claims data. This article, however, works exclusively with aggregate loss triangles. Developing a neural network model that is applied exclusively to loss triangles is highly relevant, as it enhances comparability and interpretability of results (as opposed to traditional reserving), and some insurers may still have limited data, preventing them from applying individual loss reserving methodologies. Furthermore, it is not obvious that machine learning methods can replicate their documented accuracy when presented with limited data, which is an interesting research question in itself.
In this paper, we demonstrate how neural networks can improve on traditional models, even with limited triangular data:
\begin{arcitem}
\item \emph{Data scarcity:} The MDN was applied by \citet{Ku2020}, but in the context of individual claims modelling. The current paper shows that the MDN also finds success when applied to loss triangles.
\item \emph{Model selection:} \citet*{RoRi2019} outline a fixed origin data partition \citep[see][for more detail]{BeBe2012}, but use more granular data, hence this methodology has not been tested on a sparse loss triangle. While \citet*{GaRiWu2020} work with loss triangles, individual claims data are used to partition into training and validation sets. \citet*{BaRi2020} recently apply the rolling origin model validation methodology \citep[see][for more details]{BeBe2012} exclusively to loss triangles. However, as only non-machine learning models were tested, no validation set was constructed in that paper.
With the rolling origin methodology and model searching algorithm, the neural network designs chosen in this paper produce robust, smooth, and accurate central and distributional forecasts, showcasing their combined effectiveness. Given that neural networks can perform unreliably with small datasets, these results show that the rolling origin partition is feasible for loss triangles of a certain size (40 $\times$ 40 triangles were used in this paper), which should encourage further neural network modelling with loss triangles.
\item \emph{Data variety:} Additionally, for NNs to prove their practicality and reliability, they must provide accurate results in a variety of different triangles (for instance, in a range of claim development situations which would invalidate the chain ladder assumptions). The current paper contributes to the literature by testing the MDN on a variety of environments of varying complexity and specifications, using both simulated and real data. Some of the environments tested are derived in concept from triangles used by \citet*{HaGaJa2017, GaRiWu2020}. The MDN achieved superior probabilistic forecasts relative to the ccODP in all environments tested (see Section \ref{sec:Data} for details).
\end{arcitem}
\subsection{Structure of the paper}
Section \ref{sec:ModelDesc} provides a detailed overview of the models used in this paper; the MDN, ResMDN and the benchmark ccODP models. Section \ref{sec:ModelDev} outlines the model development methodologies, including rolling origin validation and the hyper-parameter selection algorithm. The section concludes with an overview of the environments used. Section \ref{sec:training} provides details into the network training procedure and evaluation metrics. The standard MDN's results are analysed in Section \ref{sec:Results}, with practical considerations, including the ResMDN, explored in Section \ref{sec:practical}. Section \ref{sec:conclusion} concludes.
\section{Description of the ccODP, MDN and ResMDN models}
\label{sec:ModelDesc}
\subsection{Notation}
Let us first introduce some basic notation. Let
\begin{itemize}
\item $\Phi(x | \mu ,\sigma) = P(Z \leq x)$ be the distribution function of a normal distribution with mean $\mu$ and standard deviation $\sigma$, that is, $Z \sim N(\mu, \sigma)$;
\item $ d\Phi(x| \mu, \sigma)=\phi(x | \mu, \sigma) dx$, that is, $\phi(x | \mu, \sigma) $ is the probability density function of $Z$;
\item $X_{i,j}$ be the incremental claims paid in accident period $i$ and development period $j$;
\item $\hat{X}_{i,j} $ be a random variable representing a model's prediction of $X_{i,j}$, whose distribution is intended to match that of $X_{i,j}$;
\item AQ and DQ be abbreviations for accident quarter and development quarter, respectively.
\end{itemize}
\subsection{GLM: Cross-Classified Over-Dispersed Poisson (ccODP) }
\label{sec:ccODP}
The benchmark model used in this paper is the Cross-Classified Over-Dispersed Poisson model (``ccODP''), in line with the existing NN loss reserving literature \citep*{Ku2019,GaRiWu2020,Ga2019,DeLiWu2020,Ku2020,Wu2018}. The ccODP model assumes that incremental claims, $X_{i,j}$, follow a Cross-Classified Over-Dispersed Poisson distribution
\begin{equation}
\frac{\hat{X}_{i,j}}{D} \sim \text{Poi} \bigg(\frac{A_iB_j}{D} \bigg),
\text{ where }E[\hat{X}_{i,j}] = A_iB_j\text{ and }Var(\hat{X}_{i,j}) = D A_iB_j.
\label{eqn:ccODP}
\end{equation}
The Cross-Classified structure of the model, as well as the assumption of a constant $D$, leads it to produce mean estimates that are identical to the Chain Ladder. Hence, the ccODP's strengths and weaknesses in central estimate accuracy follow from the Chain Ladder's characteristics. In practice, assuming the ccODP model is applied, some of these weaknesses can be easily mitigated.
For example, a low value for $X_{40,1}$ will lead the ccODP to estimate low losses for all of AQ40. Hence, in this paper, if the ccODP coefficient for AQ40, $ln(A_{40})$, is below the average of all AQ coefficients, it is replaced by the average of the coefficients of the previous three quarters, as follows:
\begin{equation}
\text{If } ln(A_{40}) < \frac{\sum_{i = 1}^{40}ln(A_i)}{40} \text{, then }ln(A_{40}) \leftarrow \frac{ln(A_{37}) + ln(A_{38}) + ln(A_{39})}{3}.
\end{equation}
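For illustration, the above backstop is straightforward to implement; the following Python sketch assumes the fitted log accident-quarter coefficients are held in a NumPy array (the function and variable names are illustrative only):
\begin{verbatim}
import numpy as np

def adjust_latest_aq(ln_A):
    # ln_A: fitted log AQ coefficients ln(A_1), ..., ln(A_40)
    ln_A = np.asarray(ln_A, dtype=float).copy()
    # If ln(A_40) is below the average of all AQ coefficients,
    # replace it with the mean of the previous three quarters
    if ln_A[-1] < ln_A.mean():
        ln_A[-1] = ln_A[-4:-1].mean()
    return ln_A
\end{verbatim}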
\subsection{Mixture Density Network (MDN)}
In this paper, we use Mixture Density Networks (MDNs) to perform probabilistic forecasting of outstanding claims.
\subsubsection{Distribution of the incremental claims}
Incremental claims $X_{i,j}$ are assumed to follow a mixture Gaussian distribution
\begin{equation}
f_{\hat{X}_{i,j}}(x) = \sum_{k = 1}^{K}\alpha_{i,j,k} \phi({x | \mu_{i,j,k},{\sigma_{i,j,k}} } )\label{eq:MDN}
\end{equation}
With that distributional assumption, the output layer of the MDN estimates the parameters of the mixture distribution, $(\boldsymbol{\alpha}, \boldsymbol{\mu}, \boldsymbol{\sigma}$), which are used to form a mixture Gaussian density. A Negative Log Likelihood (NLL) loss function
\begin{equation}
NLLLoss(\textbf{X}, \hat{\textbf{X}}|\textbf{w}) = -\frac{1}{|\textbf{X}|}\sum_{i,j: X_{i,j} \in \textbf{X}} ln(f_{\hat{X}_{i,j}}(X_{i,j} | \textbf{w}))\label{eq:NLL}
\end{equation}
is used to train the MDN, where $\textbf{X}$ is the set of cells $X_{i,j}$ in the training set, $|\textbf{X}|$ is the cardinality of $\textbf{X}$, $\hat{\textbf{X}}$ is the set of predicted distributions of $X_{i,j}$ and $\textbf{w}$ is the set of weights in the MDN.
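For illustration, a minimal TensorFlow sketch of the mixture Gaussian negative log-likelihood \eqref{eq:NLL} is given below; this is a direct transcription rather than a production implementation (in practice, a log-sum-exp formulation is preferable for numerical stability):
\begin{verbatim}
import numpy as np
import tensorflow as tf

def mdn_nll(y_true, alpha, mu, sigma):
    # y_true: (batch, 1) observed claims X_{i,j}
    # alpha, mu, sigma: (batch, K) mixture parameters
    z = (y_true - mu) / sigma
    # Gaussian density of each component, evaluated at y_true
    comp = tf.exp(-0.5 * tf.square(z)) / (sigma * np.sqrt(2.0 * np.pi))
    mix = tf.reduce_sum(alpha * comp, axis=-1)  # mixture density
    return -tf.reduce_mean(tf.math.log(mix + 1e-12))
\end{verbatim}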
The MDN is not structurally restricted to fitting mixture Gaussians to the response; it can estimate the parameters of any desired distribution so long as the loss function is specified accordingly. In this paper, only the mixture Gaussian framework was considered, with the output layer estimating the $(\boldsymbol{\alpha},\boldsymbol{\mu} ,\boldsymbol{\sigma} )$ parameters.
As \citet*{Bi1994} notes, the mixture Gaussian distribution, given a sufficient number of components and hidden layers, is capable of approximating any desired distribution to any desired accuracy. Mixture densities with more components will certainly be more flexible; however, practical obstacles such as over-parametrisation and data insufficiency will limit the range of distributions fit by the MDN.
Alternatively, while maintaining a mixture Gaussian output layer, a mixture \emph{Log-Gaussian} can also be fit to $X_{i,j}$ by fitting a mixture Gaussian to $ln(X_{i,j})$ (this holds by the definition of a mixture random variable; see Appendix \ref{app:log} for details). This distribution helps to address the practical limitations to the flexibility of the mixture Gaussian by providing a positive, heavier-tailed option. Furthermore, taking the log of incremental claims linearises the data, which can make training simpler and more efficient. Both the mixture Gaussian and mixture Log-Gaussian distributions achieved impressive results, analysed in Section \ref{sec:Results}.
\begin{remark} We modelled the log of the data where justified (as is often the case in actuarial applications), leading to a mixture Log-Gaussian, but alternative transforms could be readily used by the modeller if needed. \end{remark}
\subsubsection{Structure of the MDN}
Figure \ref{fig:MDN} provides a basic visualisation of the MDN's design. Mixture Density Networks differ from other neural networks due to their output layer, which fits a mixture distribution, commonly a mixture Gaussian, to the response variable. For a detailed overview of neural networks, their design, mechanism and terminology, see, for instance, \citet*{Ri2018}. In the case of aggregate loss triangles that we consider, the input variables of the MDN are $i$ and $j$, the accident and development periods, respectively. These variables are passed through a fully connected feedforward module, which consists of one or more hidden layers, each layer consisting of a number of neurons. Each neuron in a hidden layer takes a weighted sum of the previous layer's output, before passing it through an activation function. The final hidden layer's output is then passed to the output layer, which produces the desired distribution parameters.
\begin{figure}[htb]
\centerline{\includegraphics[width = 8cm]{Images/mdn.PNG}}
\caption{The basic design of the Mixture Density Network (MDN). The inputs $(i,j)$ are the accident and development quarters respectively. The outputs are the parameters of the mixture Gaussian distribution, $(\boldsymbol{\alpha}, \boldsymbol{\mu}, \boldsymbol{\sigma})$.}
\label{fig:MDN}
\end{figure}
Let $w_{a,c}^l$ be the weight parameter connecting the $a^{th}$ neuron in the $l^{th}$ layer to the $c^{th}$ neuron in the $(l+1)^{th}$ layer. Define $b_{a}^{l-1}$ as the bias term added to the $a^{th}$ neuron in the $l^{th}$ layer. Weighted sums of the inputs, $(i,j)$, are passed into the first hidden layer, before an activation function $g$ yields the following output for the $p^{th}$ node in that layer:
\begin{equation}
\textbf{z}_{i,j,p}^1 = g(i \times w_{1,p}^0 + j\times w_{2,p}^0 + b_{p}^0).
\end{equation}
We assume $L$ hidden layers in the MDN, with $D$ nodes in each layer. Each node in successive hidden layers takes a weighted sum of the output from nodes in the previous layer, such that $\textbf{z}_{i,j,p}^L$, the $p^{th}$ node in the final hidden layer, is calculated as
\begin{equation}
\textbf{z}_{i,j,p}^L = g(\sum_{d = 1}^{D}w_{d,p}^{L-1}\textbf{z}_{i,j,d}^{L-1} + b_{p}^{L-1}).
\end{equation}
The output layer is split into three sections, each with $K$ nodes, $K$ being the number of components in the mixture density. We call these sections the alpha, mu and sigma sections, respectively. Similarly to the hidden layers, each node in the output layer takes a weighted sum of the output of all nodes in the last hidden layer, Layer $L$. The weighted sums are then passed through a different activation function for each section to yield the final output of the MDN, $(\boldsymbol{\alpha},\boldsymbol{\mu} ,\boldsymbol{\sigma} )$. Specifically:
\begin{description}
\item[\textbf{Alpha:}] The output of the $k^{th}$ node of the alpha section
\begin{equation}
\textbf{z}_{i,j,k}^\alpha = \sum_{d = 1}^{D}w_{d,k}^{L, \alpha}\textbf{z}_{i,j,d}^{L} + b_{k}^{L, \alpha} \text{, for } k = 1, 2, \ldots, K
\label{eq:alpha1}
\end{equation}
leads to
\begin{equation}
\alpha_{i,j,k} = \frac{e^{\textbf{z}_{i,j,k}^\alpha}}{\sum_{k' = 1}^{K}e^{\textbf{z}_{i,j,k'}^\alpha}} .
\label{eq:alpha2}
\end{equation}
Note that the output $\textbf{z}_{i,j,k}^\alpha$ was passed through a Softmax activation function, which ensures that $ \sum_{k = 1}^{K} \alpha_{i,j,k} = 1 $.
\item[\textbf{Mu:}] Similarly,
\begin{equation}
\textbf{z}_{i,j,k}^\mu = \sum_{d = 1}^{D}w_{d,k}^{L, \mu}\textbf{z}_{i,j,d}^{L} + b_{k}^{L, \mu} \text{, for } k = 1, 2, \ldots, K
\label{eq:mu1}
\end{equation}
leads to
\begin{equation}
\mu_{i,j,k} = \textbf{z}_{i,j,k}^\mu
\label{eq:mu2}
\end{equation}
as there are no constraints on the mu layer which would require an activation function. \citet*{Bi1994} notes that such a design represents an `un-informative prior' on $\mu$, which befits the lack of constraints on the mean.
\item[\textbf{Sigma:}] Finally, the sigma output
\begin{equation}
\textbf{z}_{i,j,k}^\sigma = \sum_{d = 1}^{D}w_{d,k}^{L, \sigma}\textbf{z}_{i,j,d}^{L} + b_{k}^{L, \sigma} \text{, for } k = 1, 2, \ldots, K
\label{eq:sigma1}
\end{equation}
is passed through an exponential function,
\begin{equation}
{\sigma_{i,j,k}} = e^{\textbf{z}_{i,j,k}^\sigma},
\label{eq:sigma2}
\end{equation}
which ensures the standard deviation is always positive \citep*{HjNa2000}.
\end{description}
Here, $w_{d,k}^{L,\alpha}, w_{d,k}^{L,\mu}, w_{d,k}^{L,\sigma}$ are the weights connecting the output of node $d$ in layer $L$ to node $k$ in the alpha, mu and sigma sections, respectively. Thus, for each input cell $(i,j)$, a unique combination of parameters, $$(\alpha_{i,j,1}, \ldots, \alpha_{i,j,K}, \mu_{i,j,1}, \ldots, \mu_{i,j,K}, \sigma_{i,j,1}, \ldots, \sigma_{i,j,K}),$$ is produced in the output layer of the MDN, which then generates the probability density for $\hat{X}_{i,j}$
\begin{equation}
f_{\hat{X}_{i,j}}(x) = \sum_{k = 1}^{K} \alpha_{i,j,k} \phi(x | \mu_{i,j,k}, \sigma_{i,j,k}).\label{eq:MDN2}
\end{equation}
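The architecture described above translates directly into code. The following Keras sketch builds the three output sections with their respective activations; the hyper-parameter values shown correspond to the initial settings used in Section \ref{sec:algorithm}, and all names are illustrative:
\begin{verbatim}
import tensorflow as tf
from tensorflow.keras import layers, Model

def build_mdn(h=2, n=60, K=2):
    inp = layers.Input(shape=(2,))      # inputs (i, j)
    z = inp
    for _ in range(h):                  # h hidden layers of n neurons
        z = layers.Dense(n, activation="sigmoid")(z)
    # Output sections: softmax for alpha, identity for mu, exp for sigma
    alpha = layers.Dense(K, activation="softmax", name="alpha")(z)
    mu = layers.Dense(K, name="mu")(z)
    sigma = layers.Dense(K, activation=tf.exp, name="sigma")(z)
    return Model(inp, [alpha, mu, sigma])
\end{verbatim}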
\subsection{Boosting GLM based models with the MDN: ResMDN}
The Mixture Density Network (``MDN'') has greater computational complexity than the standard feedforward neural network, hence its interpretability is even lower. To implement a more interpretable structure, this paper adapts the residual neural network (``ResNet'') design implemented successfully by \citet*{GaRiWu2020}, \citet*{Ga2019} and \citet*{Po2019}, which boosted a GLM model with a neural network, resulting in a more interpretable and stable model. In this paper, this boosting design was adapted to the MDN to create the ``ResMDN''. Note that in our ResMDN approach the mean of the resulting model can be interpreted as a boosted version of the GLM backbone, but the other probabilistic distributional properties of the resulting model are inherited from the MDN. This makes the ResMDN approach in principle quite different from the ResNet approach by \citet*{GaRiWu2020}, which focused on the mean.
The ResMDN uses a skip connection, applied in the form of an Embedding Layer, to connect the input layer directly to the output layer. This skip connection allows the MDN to initialise with an approximate GLM fit, subsequently enabling the feedforward module to boost the GLM during training.
In the following, we chose to illustrate the ResMDN by boosting the well-known and widely used ccODP model. It is worthwhile to note that---quite naturally---some of the benefits and drawbacks of the ccODP will flow on to the outcomes of the resulting ResMDN model to a certain extent. Indeed, this is illustrated in Section \ref{sec:practical}.
\subsubsection{Distribution of the incremental claims}
The ResMDN embeds an approximation of the Cross-Classified Over-Dispersed Poisson (ccODP) model (see Section \ref{sec:ccODP} for more detail), which follows the distribution outlined in \eqref{eqn:ccODP}. In neural network terminology, an embedding is the mapping of a discrete or categorical input variable into a numerical vector, which is then fed into the network \citep{Ri2018}.
Since the output of the ResMDN takes the form of parameters for a mixture Gaussian distribution, the GLM initialisation's density is approximated by a mixture Gaussian, spread evenly over $K$ components. The parameters of this approximation are fed into the ResMDN as embeddings of the GLM. The distribution $ f_{\hat{X}_{i,j}^{ccODP}}(x)$ of $X_{i,j}$ as estimated by the ccODP is approximated by
\begin{equation}
f_{\hat{X}_{i,j}^{ccODP}}(x) \approx \sum_{k = 1}^{K}\alpha^{GLM}_{i,j,k} \phi(x | {\mu^{GLM}_{i,j,k}, \sigma^{GLM}_{i,j,k}}), \label{eq:MDNODP}
\end{equation}
where
\begin{equation}
\alpha_{i,j,k}^{GLM} = \frac{1}{K}, \quad \mu_{i,j,k}^{GLM} = E[\hat{X}_{i,j}^{ccODP}] = A_iB_j, \quad \sigma_{i,j,k}^{GLM} = \sqrt{Var[\hat{X}_{i,j}^{ccODP}]} = \sqrt{D A_iB_j}.
\end{equation}
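The embedding parameters in \eqref{eq:MDNODP} can be precomputed from the fitted ccODP; a Python sketch (with illustrative names) is as follows:
\begin{verbatim}
import numpy as np

def glm_embedding_params(A, B, D, K):
    # One row per cell, ordered by the cell index c_{i,j}; each row
    # holds (ln alpha, mu, ln sigma) for the K components.
    rows = []
    for a in A:                               # accident periods
        for b in B:                           # development periods
            mean, sd = a * b, np.sqrt(D * a * b)
            rows.append(np.concatenate([
                np.full(K, np.log(1.0 / K)),  # equal weights 1/K
                np.full(K, mean),
                np.full(K, np.log(sd))]))
    return np.asarray(rows)                   # shape (n_cells, 3K)
\end{verbatim}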
\subsubsection{ResMDN structure}
The ResMDN's structure resembles the MDN very closely. The input layer of the ResMDN consists of the accident and development periods, $(i,j)$, as well as a unique categorical integer, $c_{i,j} = 40(i-1) + j - 1$, which allows the ResMDN's embedding layer to identify the specific cell $(i,j)$ and produce the corresponding GLM loss estimate for that cell as output. Hence, each cell $(i,j)$ is assigned an integer in $\{0, 1, \ldots, 1599\}$. The variables $(i,j)$ are passed through an MDN which excludes the activations of the final output layer. An embedding layer takes the categorical input $c_{i,j}$ and produces as output:
\begin{equation}
\bigg(ln(\boldsymbol{\alpha^{GLM}_{i,j}}), \boldsymbol{\mu^{GLM}_{i,j}}, ln(\boldsymbol{\sigma^{GLM}_{i,j}}) \bigg).
\end{equation}
The outputs from both the fully connected module and the embedding layer are added together, before the Softmax and exponential activations are applied to the alpha and sigma additions, respectively. The final output of the ResMDN consists of the mixture Gaussian parameter estimates, $(\boldsymbol{\alpha^{ResMDN}}, \boldsymbol{\mu^{ResMDN}}, \boldsymbol{\sigma^{ResMDN}} )$. Figure \ref{fig:ResMDN} provides a visualisation of the ResMDN model. The key design features of the ResMDN are outlined below:
\begin{figure}[htb]
\centerline{\includegraphics[width = 8cm]{Images/resnetmdn.PNG}}
\caption{The ResMDN design with a mixture Gaussian output. The embedding layer converts the input to mixture Gaussian parameters approximating the GLM backbone. The feedforward module boosts the GLM initialisation during training.}
\label{fig:ResMDN}
\end{figure}
\begin{itemize}
\item \textbf{Fully Connected Module:} Let $\textbf{z}_{i,j,k}^\alpha, \textbf{z}_{i,j,k}^\mu, \textbf{z}_{i,j,k}^\sigma$ be as described in \eqref{eq:alpha1}, \eqref{eq:mu1} and \eqref{eq:sigma1}, that is, the MDN's output before the output layer activations are applied. The fully connected module of the ResMDN performs the function
\begin{equation}
(i,j) \mapsto \{(\textbf{z}_{i,j,k}^\alpha, \textbf{z}_{i,j,k}^\mu, \textbf{z}_{i,j,k}^\sigma),\ k = 1, 2, \ldots, K \}.
\label{eq:FullyConnected}
\end{equation}
\item \textbf{Embedding Layer:} The embedding layer weights are pre-set to provide the mapping
\begin{equation}
c_{i,j} \mapsto \bigg(ln(\boldsymbol{\alpha^{GLM}_{i,j}}), \boldsymbol{\mu^{GLM}_{i,j}}, ln(\boldsymbol{\sigma^{GLM}_{i,j}})\bigg).
\label{eq:Embedding}
\end{equation}
The logs of the $\boldsymbol{\alpha}$ and $\boldsymbol{\sigma}$ parameters are produced in the embedding layer, since the Softmax and exponential activation functions will take the exponent in the output layer nodes.
\item \textbf{Addition and Final Activation:} The outputs from the embedding and fully connected modules are added together element-wise:
\begin{align}
\textbf{Addition: }&\bigg(ln(\alpha^{GLM}_{i,j,k}), \mu^{GLM}_{i,j,k}, ln(\sigma^{GLM}_{i,j,k}), \textbf{z}_{i,j,k}^\alpha, \textbf{z}_{i,j,k}^\mu, \textbf{z}_{i,j,k}^\sigma \bigg) \nonumber \\
& \mapsto \bigg( ln(\alpha^{GLM}_{i,j,k}) + \textbf{z}_{i,j,k}^\alpha, \mu^{GLM}_{i,j,k} + \textbf{z}_{i,j,k}^\mu, ln(\sigma^{GLM}_{i,j,k}) + \textbf{z}_{i,j,k}^\sigma\bigg) \nonumber \\
& \quad= \bigg( ln(\alpha^{GLM}_{i,j,k}) + \textbf{z}_{i,j,k}^\alpha, \mu^{ResMDN}_{i,j,k}, ln(\sigma^{ResMDN}_{i,j,k}) \bigg),
\label{eq:ResMDNAddition}
\end{align}
before the Softmax and exponential activations are applied to the alpha and sigma layers, respectively:
\begin{align}
\textbf{Final Activations: }&\bigg(ln(\alpha^{GLM}_{i,j,k}) + \textbf{z}_{i,j,k}^\alpha, \mu^{ResMDN}_{i,j,k}, ln(\sigma^{ResMDN}_{i,j,k})\bigg) \nonumber \\
& \mapsto \left(\frac{{\alpha^{GLM}_{i,j,k} e^{\textbf{z}_{i,j,k}^\alpha}}}{\sum_{k' = 1}^{K}\alpha^{GLM}_{i,j,k'} e^{\textbf{z}_{i,j,k'}^\alpha}}, \mu^{ResMDN}_{i,j,k}, e^{ln(\sigma^{ResMDN}_{i,j,k})} \right) \nonumber \\
& \quad = \left(\alpha^{ResMDN}_{i,j,k}, \mu^{ResMDN}_{i,j,k}, \sigma^{ResMDN}_{i,j,k} \right).
\label{eq:ResMDNFinal}
\end{align}
Hence, the boosted mixture Gaussian parameters are produced in the Output Layer.
\item \textbf{Initialisation:} The activations, $\textbf{z}_{i,j,k}^\alpha, \textbf{z}_{i,j,k}^\mu, \textbf{z}_{i,j,k}^\sigma$, are generated using the parameters, $(\textbf{w}_L, \textbf{b}_L)$, defined in \eqref{eq:alpha1}--\eqref{eq:sigma2}. These parameters, representing the weights and biases connecting the final hidden layer to the output layer, are initialised at 0, such that
$$\textbf{z}_{i,j,k}^\alpha = \textbf{z}_{i,j,k}^\mu = \textbf{z}_{i,j,k}^\sigma = 0, \quad \text{and hence} \quad \alpha_{i,j,k}^{ResMDN} = \alpha_{i,j,k}^{GLM}, \quad \mu_{i,j,k}^{ResMDN} = \mu_{i,j,k}^{GLM}, \quad \sigma_{i,j,k}^{ResMDN} = \sigma_{i,j,k}^{GLM},$$
hence producing the GLM approximation in the Output Layer at the initialisation of the ResMDN. This initialisation follows the methodology of \citet*{GaRiWu2020} closely.
\end{itemize}
During training, the embedding layer maintains constant output, while the fully connected module adjusts its weights to capture non-linearities which the GLM has missed. The ResMDN's overall function at the termination of training is
\begin{align}
(i,j, c_{i,j}) &\mapsto \left(\frac{{\alpha^{GLM}_{i,j,k} e^{\textbf{z}_{i,j,k}^\alpha}}}{\sum_{k' = 1}^{K}\alpha^{GLM}_{i,j,k'} e^{\textbf{z}_{i,j,k'}^\alpha}}, \mu^{GLM}_{i,j,k} + \textbf{z}_{i,j,k}^\mu, {\sigma^{GLM}_{i,j,k} e^{\textbf{z}_{i,j,k}^\sigma}} \right) \nonumber \\
&\quad= \left(\alpha^{ResMDN}_{i,j,k}, \mu^{ResMDN}_{i,j,k}, \sigma^{ResMDN}_{i,j,k} \right)\text{, for } k = 1,2,...,K.
\label{eq:ResMDNFinalEqn}
\end{align}
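To make the skip connection concrete, the following Keras sketch wires the fully connected module and the frozen GLM embedding together as in \eqref{eq:ResMDNFinalEqn}, reusing the glm_embedding_params sketch above (all names are illustrative):
\begin{verbatim}
import tensorflow as tf
from tensorflow.keras import layers, Model

def build_resmdn(glm_params, h=2, n=60, K=2, n_cells=1600):
    ij = layers.Input(shape=(2,))                   # (i, j)
    cell = layers.Input(shape=(1,), dtype="int32")  # cell index c_{i,j}
    # Fully connected module; the zero-initialised final layer
    # reproduces the GLM approximation at initialisation
    z = ij
    for _ in range(h):
        z = layers.Dense(n, activation="sigmoid")(z)
    z = layers.Dense(3 * K, kernel_initializer="zeros",
                     bias_initializer="zeros")(z)
    # Embedding layer: frozen (ln alpha, mu, ln sigma) of the GLM
    emb = layers.Embedding(n_cells, 3 * K, weights=[glm_params],
                           trainable=False)(cell)
    emb = layers.Flatten()(emb)
    s = layers.Add()([z, emb])                      # element-wise addition
    ln_alpha, mu, ln_sigma = tf.split(s, 3, axis=-1)
    alpha = tf.nn.softmax(ln_alpha)                 # final activations
    sigma = tf.exp(ln_sigma)
    return Model([ij, cell], [alpha, mu, sigma])
\end{verbatim}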
The NN boosting terms are relatively easy to analyse in relation to the GLM fit, especially the mean and volatility terms. Furthermore, the black box neural network modelling is only applied to the residuals, meaning the lack of interpretability is restricted to that domain only. Hence the ResMDN improves the interpretability of the model compared to the MDN.
\section{Model Development}
\label{sec:ModelDev}
\subsection{Model selection using the rolling origin method for training, validating, and testing}
The accuracy of the neural network depends heavily on its hyper-parameters, such as the number of hidden layers, number of neurons, and weight regularisation penalty. Therefore, assuming that one model design will work well in all environments will lead to sub-optimal performance. Hence, a training/testing split is required to assess different model designs and choose the best one found. This paper partitions the loss triangle using the rolling origin method, which performs the training and testing split in multiple stages, each one progressively shifting the testing set forward in time (see \citet*{Ta2000,BeBe2012,BaRi2020} for details). The total test error of the model is a weighted average of the test error in each stage. This methodology allows for a systematic hyper-parameter fine-tuning algorithm to be implemented; see Section \ref{sec:algorithm}.
The loss triangle has the characteristics of a time series, with the incremental claims generally decaying over successive development periods. Where the objective of modelling is to improve interpolation accuracy, randomly splitting the data into training, validation and testing sets is common and sufficient. With loss triangles, however, the objective is \emph{extrapolation}, hence the testing set needs to focus on assessing the model's \emph{projection accuracy}. This is done by assigning (a chosen number of) the \emph{latest} calendar periods of the triangle to the \emph{testing} set and the \emph{earliest} to \emph{training}. Similarly, the validation set is chosen to be the latest calendar periods which are not assigned for testing. That way, when combined with Early Stopping (see Section \ref{sec:training}), the MDN stops training when short term projection accuracy is maximised.
Hence, it is important to \emph{sequentially} split the data into training, validation and testing sets to more effectively assess the model's accuracy when extrapolating. In our illustrative example of a 40$\times$40 triangle, the rolling origin validation method was used in two partitions:
\begin{itemize}
\item In the first partition, the data is assumed to comprise a 30$\times$30 triangle, which leaves the latest 10 calendar periods for the testing set. This partition focuses on assessing the model’s long term forecasting accuracy.
\item The second partition works with a 36$\times$36 triangle, leaving 4 calendar periods for testing. Building on the first partition, later calendar periods are included in training, which helps to assess the model's ability to capture the more recent and more holistic trends present in the triangle.
\end{itemize}
For all partitions, the validation set included the 4 latest non-testing calendar periods, excluding the first 3 accident and development periods. This exclusion was done to provide the MDN with more training data for the latest accident and development periods. Instead, the DQ2 and DQ3 validation points are taken evenly from earlier AQs, an arbitrary but simple approach. Figure \ref{fig:rollingorigin} visualises the data partitions.
A potential downside of the rolling origin method is that the training data does not include points from the latest calendar quarters. Therefore, in situations where we expect losses to possess substantially different characteristics in the later periods, this method might not be able to effectively capture the change in trends. In such situations, we implement an adjusted data partitioning methodology to allow more training data in those periods (see Appendix \ref{app:partition}).
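A minimal sketch of one rolling-origin stage is given below, under the convention that cell $(i,j)$ belongs to calendar period $i+j-1$ (the function name and calling convention are illustrative):
\begin{verbatim}
def rolling_origin_split(cells, train_size, n_test):
    # cells: list of (i, j) upper-triangle coordinates, 1-indexed.
    # The n_test calendar periods following the training/validation
    # triangle of train_size periods form the testing set.
    train, test = [], []
    for (i, j) in cells:
        t = i + j - 1                  # calendar period of cell (i, j)
        if t > train_size + n_test:
            continue                   # outside this stage
        (test if t > train_size else train).append((i, j))
    return train, test

# Partition 1: 30-period triangle, latest 10 periods for testing
# Partition 2: 36-period triangle, latest 4 periods for testing
# train1, test1 = rolling_origin_split(cells, 30, 10)
# train2, test2 = rolling_origin_split(cells, 36, 4)
\end{verbatim}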
\begin{figure}[htb]
\centering
\begin{subfigure}{.5\textwidth}
\centering
\includegraphics[width=.9\linewidth]{Images/Partition1.PNG}
\caption{Partition 1: Assesses projection accuracy}
\label{fig:sub1}
\end{subfigure}%
\begin{subfigure}{.5\textwidth}
\centering
\includegraphics[width=.9\linewidth]{Images/Partition2.PNG}
\caption{Partition 2: Assesses trend fitting}
\label{fig:sub2}
\end{subfigure}
\caption{The 2-stage partition of the triangle into training, validation and testing sets. The first partition focuses on assessing projection accuracy, while the second assesses the model's ability to fit more recent trends in the data.}
\label{fig:rollingorigin}
\end{figure}
\subsection{Network hyper-parameter selection algorithm }
\label{sec:algorithm}
There is a vast array of literature surrounding neural network model search and selection, and no single algorithm dominates the rest.
The MDN's architecture was selected using an algorithm that successively optimises one hyper-parameter at a time. The number of components in the density is increased so long as the test error decreases, which allows the algorithm to consider densities of arbitrary flexibility. The loss triangle size restricts the parametrisation of the mixture density, hence this algorithm aims to find the optimal distributional flexibility allowed by the given data.
Before the algorithm is run, certain aspects of the MDN's design are set constant:
\begin{itemize}
\item The sigmoid activation function is used for all hidden layers;
\item The number of neurons is equal for all hidden layers.
\end{itemize}
\noindent With this modelling framework, two additional considerations improved the MDN's performance:
\begin{itemize}
\item Mixture Gaussian or mixture Log-Gaussian: Fitting a mixture Log-Gaussian linearises the data (in our methodology), which provided smoother results in some instances. When used, the mixture Log-Gaussian gave noticeably improved results over the mixture Gaussian in the upper triangle.
\item Adding a Mean Squared Error term to the loss function: A Negative Log-Likelihood loss function sometimes failed to capture sharp points in the data, preferring to allocate volatility to these points. This issue was solved by adding a Mean Squared Error (``MSE'') term to the loss, which encouraged the MDN to provide more accurate central estimates.
\end{itemize}
\noindent The hyper-parameters chosen to fine-tune are:
\begin{enumerate}
\item $\lambda_w$, the L2 weight penalty. The values $[0,0.0001,0.001,0.01,0.1]$ were tested.
\item $\lambda_\sigma$, the L2 sigma activity penalty. When central estimates are inaccurate, the MDN may increase volatility estimates unreasonably to reduce the Negative Log Likelihood Loss \eqref{eq:NLL} \citep[Ch.~6.2]{GoBeCo2017}, hence a penalty was applied to the sigma output. The values $[0,0.0001,0.001,0.01,0.1]$ were tested.
\item $p$, the dropout rate (see \citet*{SrHiKrSu2014} for details). The values $[0,0.1,0.2]$ were tested.
\item $n$, the number of neurons in each hidden layer. The values $[20,40,60,80,100]$ were tested.
\item $h$, the number of hidden layers. The values $[1,2,3,4]$ were tested.
\item $K$, the number of components in the mixture density.
\end{enumerate}
\noindent The training loss becomes
\begin{align}
Loss(\textbf{X}, \hat{\textbf{X}}|\textbf{w}, \lambda_w, \lambda_{\sigma} ) &= - \frac{1}{|\textbf{X}|}\sum_{i,j: X_{i,j} \in \textbf{X}} ln(f_{\hat{X}_{i,j}}(X_{i,j} | \textbf{w})) \nonumber \\
&+ \lambda_w\textbf{w}\cdot\textbf{w} + \lambda_{\sigma}\sum_{i,j: X_{i,j} \in \textbf{X}}\sum_{k = 1}^{K}\sigma_{i,j,k}^2.
\label{eq:trLoss}
\end{align}
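A TensorFlow sketch of \eqref{eq:trLoss}, building on the mdn_nll sketch of Section \ref{sec:ModelDesc} (the list of kernel tensors is assumed to be collected from the network):
\begin{verbatim}
import tensorflow as tf

def regularised_loss(y_true, alpha, mu, sigma, kernels,
                     lam_w, lam_sigma):
    # NLL plus L2 weight penalty and sigma activity penalty
    nll = mdn_nll(y_true, alpha, mu, sigma)
    l2_w = tf.add_n([tf.reduce_sum(tf.square(w)) for w in kernels])
    l2_sigma = tf.reduce_sum(tf.square(sigma))
    return nll + lam_w * l2_w + lam_sigma * l2_sigma
\end{verbatim}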
Denote by $\boldsymbol{\theta} = \{ \lambda_w, \lambda_\sigma, p, n, h, K \}$ the set of hyper-parameters to fine-tune. The hyper-parameter selection algorithm was conducted as follows:
\begin{enumerate}
\item Start with $\boldsymbol{\theta}^{initial} = \{ 0,0,0,n^{initial}, h^{initial}, K^{initial} \}$, a set of initial hyper-parameters deemed suitable through judgement. Setting $\boldsymbol{\theta}^{initial} = \{ 0,0,0,60, 2, 2 \}$ worked well in this paper, as allowing the algorithm to explore unregularised models vastly improved the fit in some instances.
\item Using $\boldsymbol{\theta}^{initial}$ and keeping all other hyper-parameters fixed, use \textbf{Grid Search} to test all desired values of $\lambda_w$, the weight penalty coefficient. Select the coefficient with the lowest test error, $\hat{\lambda}_w$, and update $\boldsymbol{\theta}^1 = \{ \hat{\lambda}_w,0,0,n^{initial}, h^{initial}, K^{initial} \}$
\item Using $\boldsymbol{\theta}^1$ and keeping all other hyper-parameters fixed, use \textbf{Grid Search} to test all desired values of $\lambda_\sigma$, the sigma activity penalty coefficient. Select the coefficient with the lowest test error, $\hat{\lambda}_\sigma$, and update $\boldsymbol{\theta}^2 = \{ \hat{\lambda}_w,\hat{\lambda}_\sigma,0,n^{initial}, h^{initial}, K^{initial} \}$
\item Using $\boldsymbol{\theta}^2$ and keeping all other hyper-parameters fixed, use \textbf{Grid Search} to test all desired values of $p$, the dropout rate. Select the rate with the lowest test error, $\hat{p}$, and update $\boldsymbol{\theta}^3 = \{ \hat{\lambda}_w,\hat{\lambda}_\sigma,\hat{p},n^{initial},\linebreak[0] h^{initial}, K^{initial} \}$
\item Using $\boldsymbol{\theta}^3$ and keeping all other hyper-parameters fixed, use \textbf{Grid Search} to test all desired values of $h$, the number of hidden layers. Select the number with the lowest test error, $\hat{h}$, and update $\boldsymbol{\theta}^4 = \{ \hat{\lambda}_w,\hat{\lambda}_\sigma,\hat{p},n^{initial}, \hat{h}, K^{initial} \}$
\item Using $\boldsymbol{\theta}^4$ and keeping all other hyper-parameters fixed, fine-tune the number of neurons and components. This process tests an increasing number of components until the test error ceases to improve. Let $n_K$ be the number of neurons which minimises the test error for a $K$-component model (among the values tested). Let $E_{n_K, K}$ be the test error of a model with $n_K$ neurons and $K$ components (with the other hyper-parameters as in $\boldsymbol{\theta}^4$). Starting at $K = 1$, increment $K$ until $E_{n_K, K} < E_{n_{K+1}, K+1}$. At this final increment, set $\hat{K} = K$ and $\hat{n} = n_{K}$. Update the hyper-parameters, $\boldsymbol{\theta}^5 = \{ \hat{\lambda}_w,\hat{\lambda}_\sigma,\hat{p},\hat{n}, \hat{h}, \hat{K} \}$
\end{enumerate}
Following the algorithm, select $\boldsymbol{\theta}^5$ as the final set of hyper-parameters and run the final model (see Section \ref{sec:training}).
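The algorithm can be summarised in compact form; in the Python sketch below, evaluate(theta) stands for training the MDN $2T$ times and returning the weighted test error \eqref{eq:test3} of Section \ref{sec:training} (all names are illustrative):
\begin{verbatim}
def select_hyperparameters(evaluate):
    theta = {"lam_w": 0, "lam_sigma": 0, "p": 0,
             "n": 60, "h": 2, "K": 2}                  # step 1
    grids = {"lam_w": [0, 1e-4, 1e-3, 1e-2, 1e-1],     # step 2
             "lam_sigma": [0, 1e-4, 1e-3, 1e-2, 1e-1], # step 3
             "p": [0, 0.1, 0.2],                       # step 4
             "h": [1, 2, 3, 4]}                        # step 5
    for name, grid in grids.items():
        theta[name] = min(grid, key=lambda v: evaluate({**theta, name: v}))

    def best_n(K):  # best neuron count for a K-component model (step 6)
        errs = {n: evaluate({**theta, "n": n, "K": K})
                for n in [20, 40, 60, 80, 100]}
        n = min(errs, key=errs.get)
        return n, errs[n]

    K, (n_K, err) = 1, best_n(1)
    while True:
        n_next, err_next = best_n(K + 1)
        if err < err_next:      # an extra component no longer helps
            break
        K, n_K, err = K + 1, n_next, err_next
    theta.update({"n": n_K, "K": K})
    return theta
\end{verbatim}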
\subsection{Data}
\label{sec:Data}
This paper tests the MDN's performance on both simulated and real data. Using simulated data allows the practitioner to simulate any desired trend, providing a controlled environment where the MDN can be directly assessed on its ability to capture these trends embedded in the data \citep*{AvTaWaWo2020}. A downside of simulating claims, as noted by \citet*{MuRyRe2011}, is that complex interactions in real data may not be captured by the simulator, hindering model development. Fitting a model on unrealistic data will reduce its validity. However, we mitigate this by applying the MDN on real data (AUSI environment; see Section \ref{sec:AUSI}) as well as the default SynthETIC dataset from \citet*{AvTaWaWo2020}, which can mimic data in a wide range of realistic situations.
Note that while the AUSI Dataset was partitioned randomly into 10 triangles (via subdivision), we simulated 200 triangles (from four separate realistic scenarios) with SynthETIC, meaning that the MDN was fit on 210 triangles in total.
\subsubsection{Simulated environments from SynthETIC}
Thanks to the flexibility offered by SynthETIC \citep*[see][for a description of the R simulation package]{AvTaWaWo2020}, four different claim environments were simulated, with various features and complexities. Simulating different environments was done to test the MDN's versatility and ability to capture complex trends and produce accurate forecasts in a variety of controlled, challenging environments. As a loss triangle is a collection of random variables, it is important to run the MDN on a large sample of triangles to gain a better understanding of its accuracy and also test its ability to provide consistent results. Hence, for each simulated environment, 50 independent triangles were simulated (of size $40\times 40$), leading to 200 triangles in total. Namely, the scenarios, or environments, were:
\begin{enumerate}
\item \textbf{Environment 1 - Simple, short tail claims: } This environment simulates short tail claims which are homogeneous in composition for all accident quarters. The reporting and settlement delays have been approximately calibrated to show similar characteristics to the simulator developed by \citet*{GaWu2018}. Figure \ref{fig:D1} plots the incremental claims for Environment 1. A spike in claim payment in DQ2 complicates this dataset, but such a feature is not uncommon in practice. This environment was a preliminary test of the feasibility of MDNs in modelling $40\times 40$ triangles and producing reasonable results.
\begin{figure}[htb]
\centerline{\includegraphics[width = 10cm]{Images/Results/D1Incremental.png}}
\caption{A plot of the incremental claims of environment 1, for selected accident quarters. Solid lines represent data in the upper triangle, while dashed lines represent data in the lower triangle.}
\label{fig:D1}
\end{figure}
\item \textbf{Environment 2 - Increase in claim processing speed:} A gradual shift from long tail to short tail claims along accident quarters is simulated, that is, an increase in claims processing speed. Initially, there are more long tail claims; however, the proportion of these claims decreases, while the proportion of short tail claims increases. As the incremental claim plots show, later AQs see higher losses early on, due to the increasing proportion of short tail claims. Figure \ref{fig:D2} plots the incremental claims of environment 2. The main question to be answered in testing this dataset is: given the systematic volatility in the claims data, can the MDN accurately distinguish between systematic and unsystematic volatility and capture the distribution of data points accurately? That is, will the MDN learn that claims are getting shorter, or will it attribute the trend to noise?\begin{figure}[htb]
\centerline{\includegraphics[width = 10cm]{Images/Results/D2Incremental.png}}
\caption{A plot of the incremental claims of environment 2, for selected accident quarters. Solid lines represent data in the upper triangle, while dashed lines represent data in the lower triangle.}
\label{fig:D2}
\end{figure}
\item \textbf{Environment 3 - Inflation shock:} Superimposed inflation is changed instantly from 0\% to 8\% per annum, starting at AQ30. The 8\% inflation remains constant in the lower triangle. This environment tests the ability of the MDN to recognise changes in calendar effects and adapt projections accordingly.
Only the last 10 calendar quarters in the upper triangle contain information regarding the inflation shock, which increases the difficulty for the MDN. A further complication is that the rolling origin partition contains few training points featuring the change in inflation, hence this environment assesses the MDN's ability to capture recent trends. Figure \ref{fig:D3} plots the incremental claims.
\begin{figure}[htb]
\centerline{\includegraphics[width = 10cm]{Images/Results/D3Incremental.png}}
\caption{A plot of the incremental claims of environment 3, for selected accident quarters. Solid lines represent data in the upper triangle, while dashed lines represent data in the lower triangle.}
\label{fig:D3}
\end{figure}
\item \textbf{Environment 4 - High Systematic Complexity:} This environment is the default triangle generated by the SynthETIC simulator, which was designed to mimic features seen in real data \citep*[see][for details]{AvTaWaWo2020}. Complex dependencies exist between claim size, reporting and settlement delay, and superimposed inflation. Settlement delay, which depends on claim size, declines over the first 20 AQs. Superimposed inflation is up to 30\%, but declines for larger claims. A legislative change at AQ20 causes small claims to face a reduction in size and settlement speed until AQ30. The general trend can be summarised by slow development, high volatility and high superimposed inflation. The volatility is primarily caused by the low claim frequency and highly volatile severity. Hence, it is normal for claims in one AQ to follow a completely different pattern (reporting, settlement, volume, development pattern) than claims in the adjacent AQ. Figure \ref{fig:D4} plots the incremental claims. This environment assesses the MDN's ability to produce accurate forecasts in a volatile environment.
\begin{figure}[htb]
\centerline{\includegraphics[width = 10cm]{Images/Results/D4Incremental.png}}
\caption{A plot of the incremental claims of environment 4, for selected accident quarters. Solid lines represent data in the upper triangle, while dashed lines represent data in the lower triangle.}
\label{fig:D4}
\end{figure}
\end{enumerate}
\subsubsection{Real Dataset - AUSI Auto Bodily Injury}
\label{sec:AUSI}
We apply the MDN to a dataset obtained through a collaborative project between Allianz, University of New South Wales (UNSW), Suncorp and Insurance Australia Group (IAG). This forms the AUSI acronym, which we will use to refer to this dataset. The Auto Bodily Injury line of business is used, which features slow claim development and high volatility. The AUSI dataset consists of transactional data for individual claims, which we aggregated into quarterly triangles. We use quarterly data from January 2005 to December 2014, which provides a 36$\times$36 upper triangle and a 4-quarter forecasting period, used to compare the MDN's results with the ccODP's. Using the individual claims data, each claim was randomly allocated to one of ten triangles, meaning that ten aggregate triangles of roughly equal size were created from this dataset. As mentioned earlier, running the MDN on multiple triangles better assesses the model's consistency. Figure \ref{fig:AUSI} plots the incremental claims for this dataset. This environment aims to assess the MDN's ability to provide accurate forecasts for real data.
\begin{figure}[htb]
\centerline{\includegraphics[width = 10cm]{Images/Results/AUSI_Incremental.png}}
\caption{A plot of the incremental claims of the AUSI environment, for selected accident quarters. Solid lines represent data in the upper triangle, while dashed lines represent data in the lower triangle.}
\label{fig:AUSI}
\end{figure}
\section{Model Training and Evaluation }
\label{sec:training}
The SynthETIC simulator produces individual claims, which are aggregated into a 40$\times$40 triangle. It is a common procedure in neural network modelling to standardise the input variables, in order to stabilise the training. Early experimentation showed that normalising the response ($X_{i,j}$) as well was crucial in achieving convergence during training.
For each hyper-parameter combination tested, $\boldsymbol{\theta} = \{ \lambda_w, \lambda_\sigma, p, n, h, K \}$, an MDN with those hyper-parameters is trained on the training set of each partition, then projected onto the testing set. The loss function we selected is the Negative Log-Likelihood, with weight and sigma activity penalties applied during training:
\begin{equation}
Loss(\textbf{X}, \hat{\textbf{X}}|\textbf{w}, \lambda_w, \lambda_{\sigma} ) = - \frac{1}{|\textbf{X}|}\sum_{i,j: X_{i,j} \in \textbf{X}} ln(f_{\hat{X}_{i,j}}(X_{i,j} | \textbf{w}))
+ \lambda_w\textbf{w}\cdot\textbf{w} + \lambda_{\sigma}\sum_{i,j: X_{i,j} \in \textbf{X}}\sum_{k = 1}^{K}\sigma_{i,j,k}^2. \label{eq:train}
\end{equation}
Via experimentation, the Adam optimiser \citep{KiBa2014} with a learning rate of 0.001 provided the most stable training compared with other optimisers such as RMSProp and Stochastic Gradient Descent. To further minimise over-fitting, Early Stopping was used to stop training as soon as the validation loss was minimised \citep{GoBeCo2017}. The validation loss rarely decreased steadily, hence training was only stopped when it had not hit a new low in the last 1000 epochs. This is referred to as the patience measure in the Keras interface; a patience lower than 1000 would sometimes prematurely stop training. Training would usually last for several thousand epochs, with higher dropout rates and larger networks often requiring up to 10--15 thousand iterations. A 10000-epoch limit was set when running the hyper-parameter optimisation algorithm, in order to increase efficiency.
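For reference, the training configuration just described corresponds to the following Keras setup (a sketch; the model, data and loss function are assumed to be defined as in the earlier sketches):
\begin{verbatim}
import tensorflow as tf

def train_mdn(model, x_tr, y_tr, x_val, y_val, loss_fn):
    model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=0.001),
                  loss=loss_fn)
    # Stop once the validation loss has not hit a new low for 1000 epochs
    early_stop = tf.keras.callbacks.EarlyStopping(
        monitor="val_loss", patience=1000, restore_best_weights=True)
    return model.fit(x_tr, y_tr, validation_data=(x_val, y_val),
                     epochs=10000,   # cap used during the model search
                     callbacks=[early_stop])
\end{verbatim}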
\subsection{Test error}
Denote by $\boldsymbol{\theta}$ the hyper-parameter values of the MDN being run. In addition, let $f_{\hat{X}_{i,j}} ( x| \textbf{w}, \boldsymbol{\theta})$ be the density of $\hat{X}_{i,j}$ projected by an MDN with hyper-parameters $\boldsymbol{\theta}$ and weights $\textbf{w}$. Let $T1$ and $T2$ be the sets of cells $(i,j)$ in the testing set of the first and second partitions, respectively. A separate MDN is trained $T$ times in each partition; let $\textbf{w}_{p,t}$ be the weights of the $t^{th}$ model trained on the $p^{th}$ partition. The test error of the MDN with hyper-parameters $\boldsymbol{\theta}$ is calculated from \eqref{eq:test1}--\eqref{eq:test3}.
\begin{equation}
\text{TestError}(\boldsymbol{\theta}, \text{Partition } 1) = -\frac{1}{T|T1|}\sum_{t = 1}^{T}\sum_{i,j: (i,j) \in T1} ln(f_{\hat{X}_{i,j}} ( X_{i,j}| \textbf{w}_{1,t},\boldsymbol{\theta}))
\label{eq:test1}
\end{equation}
\begin{equation}
\text{TestError}(\boldsymbol{\theta}, \text{Partition } 2) = -\frac{1}{T|T2|}\sum_{t = 1}^{T}\sum_{i,j: (i,j) \in T2} ln(f_{\hat{X}_{i,j}} ( X_{i,j}| \textbf{w}_{2,t}, \boldsymbol{\theta}))
\label{eq:test2}
\end{equation}
\begin{equation}
\text{TestError}(\boldsymbol{\theta}) = \frac{ |T1|\cdot\text{TestError}(\boldsymbol{\theta}, \text{Partition } 1) + |T2|\cdot\text{TestError}(\boldsymbol{\theta}, \text{Partition } 2)}{|T1| + |T2|}
\label{eq:test3}
\end{equation}
Hence, the MDN is trained $2T$ times for each set of hyper-parameters $\boldsymbol{\theta}$, as each run has a different weight initialisation and hence a different fit. Averaging the error of these runs reduces the impact of random weight initialisation on the measured performance of the hyper-parameter set $\boldsymbol{\theta}$.
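The weighted test error of \eqref{eq:test1}--\eqref{eq:test3} can be computed as in the following sketch, assuming the per-cell negative log-likelihoods of each run have already been evaluated; array names are illustrative.
\begin{verbatim}
import numpy as np

def test_error(nll_p1, nll_p2):
    # nll_p1: array of shape (T, |T1|) holding -ln f(X_ij) for each
    # run and test cell of partition 1; likewise nll_p2 for partition 2
    err1 = nll_p1.mean()                        # eq. (test1)
    err2 = nll_p2.mean()                        # eq. (test2)
    n1, n2 = nll_p1.shape[1], nll_p2.shape[1]   # |T1|, |T2|
    return (n1 * err1 + n2 * err2) / (n1 + n2)  # eq. (test3)
\end{verbatim}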
\subsection{Projection constraints}
This paper implements a mechanism for directly constraining central estimates of cells in the lower triangle, thereby directly controlling projections made by the MDN. This method allows for the practitioner's judgement to be incorporated if required. The practitioner can place upper and lower bounds on the central estimates of any desired set of cells $(i,j)$ in the lower triangle and penalise the MDN if its central estimates fall outside those boundaries.
Let $\textbf{C}$ be the set of cells $(i,j)$ in the lower triangle, which have had constraints placed on their projections. Let $C_{i,j}^{Lower}$ and $C_{i,j}^{Upper}$ be the lower and upper constraints of the central estimates for cell $(i,j) \in \textbf{C}$. Let $\hat{\mu}_{i,j} = E[\hat{X}_{i,j}]$. The loss function during training follows \eqref{eq:projConst}:
\begin{align}
\text{NLLLoss}(\textbf{X}, \hat{\textbf{X}}|\textbf{w}) = {} & -\frac{1}{|\textbf{X}_{Train}|}\sum_{i,j: X_{i,j} \in \textbf{X}_{Train}} ln(f_{\hat{X}_{i,j}}(X_{i,j} | \textbf{w})) + \text{Regularisation} \nonumber \\
& + \frac{\lambda_C}{|\textbf{C}|} \sum_{i,j: (i,j) \in \textbf{C}} \Big( \big[\max(0,\hat{\mu}_{i,j} - C_{i,j}^{Upper})\big]^2 + \big[\max(0,C_{i,j}^{Lower} - \hat{\mu}_{i,j})\big]^2 \Big)
\label{eq:projConst}
\end{align}
where $\text{Regularisation} = \lambda_w\textbf{w}\cdot\textbf{w} + \lambda_{\sigma}\sum_{i,j: X_{i,j} \in \textbf{X}_{Train}}\sum_{k = 1}^{K}\sigma_{i,j,k}^2$ as in \eqref{eq:train}, and $\lambda_C$ is a constraint violation penalty coefficient. The constraints apply a squared-distance penalty to the loss function if the central estimates of constrained cells in the lower triangle violate their bounds. With a sufficiently high penalty coefficient, the MDN's projections will satisfy the specified constraints, providing more reasonable projections. The cells in $\textbf{C}$ are randomly split in half between the training and validation sets, as the validation loss should reflect how well the projection constraints have been met in order for Early Stopping to be used effectively.
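A sketch of the constraint penalty term in \eqref{eq:projConst}, written in TensorFlow, is given below. It assumes tensors of central estimates and bounds for the cells in $\textbf{C}$ have been assembled; it is illustrative only, not the exact implementation.
\begin{verbatim}
import tensorflow as tf

def constraint_penalty(mu_hat, lower, upper, lambda_c):
    # squared-distance penalties for violating the upper/lower bounds
    over = tf.maximum(0.0, mu_hat - upper)
    under = tf.maximum(0.0, lower - mu_hat)
    # (lambda_C / |C|) * sum of squared violations
    return lambda_c * tf.reduce_mean(over**2 + under**2)
\end{verbatim}
This penalty is added to the NLL and regularisation terms during training.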
\subsection{Fitting the final model}
Once all desired hyper-parameter combinations are tested, the combination with the lowest test error, $\boldsymbol{\theta}^{min}$ (see Section \ref{sec:algorithm} for details), is set as the model architecture of choice. To produce distributional forecasts of claims in the lower triangle, the chosen MDN is run on the entire upper triangle. Only a training/validation split is needed, since the testing set was only used to compare different hyper-parameter combinations. The training/validation partition of the upper triangle is done sequentially, as visualised in Figure \ref{fig:part3}.
\begin{figure}[htb]
\centerline{\includegraphics[width = 8cm]{Images/Partition3.PNG}}
\caption{The training/validation partition of the upper triangle. The chosen MDN design is fit on the training data and used to project claims in the lower triangle}
\label{fig:part3}
\end{figure}
An MDN with hyper-parameters $\boldsymbol{\theta}^{min}$ is fit five times on the training data of Partition 3, under different weight initialisations. The five fitted distributions are ensembled to produce the final forecast. This ensemble of models produces more robust results and reduces the impact of poor runs that stop training at a bad local minimum of the loss \citep{PeCo1992}. Let $w_z$ be the set of the MDN's final weights in the $z^{th}$ run. The five fits are ensembled to produce the distribution of incremental claims shown in \eqref{eq:finaldist}.
\begin{equation}
f_{\hat{X}_{i,j}}(x) = \frac{1}{5} \sum_{z = 1}^{5} \sum_{k = 1}^{K} \alpha_{i,j,k}^{w_z} \phi(x | \mu_{i,j,k}^{w_z}, \sigma_{i,j,k}^{w_z})
\label{eq:finaldist}
\end{equation}
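Note that \eqref{eq:finaldist} is itself a mixture of $5K$ Gaussian components with weights $\alpha_{i,j,k}^{w_z}/5$. A sketch of evaluating the ensembled density for a given cell follows, assuming each fit is stored as arrays of mixture parameters; names are illustrative.
\begin{verbatim}
import numpy as np
from scipy.stats import norm

def ensemble_density(x, fits):
    # fits: list of 5 dicts with arrays 'alpha', 'mu', 'sigma' (length K)
    # for a given cell (i,j), one dict per trained MDN
    dens = 0.0
    for f in fits:
        dens += np.sum(f["alpha"] * norm.pdf(x, f["mu"], f["sigma"]))
    return dens / len(fits)  # equal-weight average over the 5 runs
\end{verbatim}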
\subsection{Model evaluation}\label{sec:metrics}
The final MDN model's forecasts for the lower triangle are \textbf{compared to those of the ccODP model}. The results of two main variables were analysed:
\begin{enumerate}
\item Individual cells, $X_{i,j}$
\item Total reserves, $R = \sum_{i,j: i+j > 41}X_{i,j} $
\end{enumerate}
Capital standards set by APRA and Solvency II require reserve allocations to meet total reserves in the lower triangle with a 75\% and 99.5\% probability of sufficiency, respectively. Hence measuring the accuracy of total reserves is important. However, it is also desirable for a model to achieve accurate total reserves by correctly modelling the individual cells, $X_{i,j}$.
\textbf{Results were analysed qualitatively and quantitatively}. Qualitative analysis allowed the MDN's strengths and weaknesses to be located graphically, while quantitative analysis provided a more objective measure of the model's accuracy. The qualitative analysis conducted included the following plots:
\begin{enumerate}
\item \textbf{Central Estimates:} Plots of the MDN and ccODP's central estimates $\hat{\mu}_{i,j}$ were compared to actual losses of the dataset $X_{i,j}$, as well as the empirical mean calculated from hundreds of simulations of the same dataset.
\item \textbf{Risk Margins:} Plots of the MDN and ccODP's mean-centred risk margins (at the 25\%, 75\% and 95\% level) were compared to empirical risk margins.
\item \textbf{Total reserves:} The distributions of total reserves estimated by the MDN and ccODP, $\hat{R}$, were plotted alongside the empirical distribution of total reserves.
\end{enumerate}
When analysing individual cells, $X_{i,j}$, the RMSE, log score and quantile score statistics were calculated on each loss triangle involved in the modelling. For total reserves, $R$, the MDN and ccODP were fit on \textbf{50 triangles for each of the four simulated data environments, and 10 independent triangles randomly partitioned from the AUSI dataset}, to generate reserve estimates for each triangle, $\hat{R}_i$ for $i = 1, 2, \ldots, 50$ (with 50 replaced by 10 for AUSI). Let $\textbf{X} = \{ X_{i,j}: i + j > 41 \} $, $\textbf{R} = \{R_i : i = 1, 2, \ldots, 50\}$, $f_{\hat{\textbf{X}}} = \{f_{\hat{X}_{i,j}}: X_{i,j}\in \textbf{X} \}$ and let $X_q$ be the $q^{th}$ quantile estimate of the variable $X$. The quantitative metrics used are calculated as follows, with an illustrative computation sketched after the list:
\begin{description}
\item[1. Distributional forecast accuracy, using the log score metric \eqref{eq:LogScore}]:
\begin{equation}
LogScore(\textbf{X},f_{\hat{\textbf{X}}}) = \frac{\sum_{(i,j): X_{i,j} \in \textbf{X}}ln(f_{\hat{X}_{i,j}}(X_{i,j}))}{|\textbf{X}|}
\label{eq:LogScore}
\end{equation}
A higher log score is desirable, as it indicates a more accurate distributional fit for the lower triangle. The log score was not calculated when analysing total reserves, as the fitted distributions usually fell completely outside the simulated empirical distribution, setting the likelihood to 0.
\item[2. Central estimate forecast accuracy, using the RMSE metric, \eqref{eq:RMSE1} and \eqref{eq:RMSE2}]:
\begin{equation}
RMSE(\textbf{X},{\hat{\textbf{X}}}) = \sqrt{\frac{\sum_{(i,j): X_{i,j} \in \textbf{X}} (X_{i,j} - \hat{X}_{i,j})^2 }{|\textbf{X}|} }
\label{eq:RMSE1}
\end{equation}
\begin{equation}
RMSE(\textbf{R},{\hat{\textbf{R}}}) = \sqrt{\frac{\sum_{i = 1}^{D} (R_i - \hat{R}_i)^2 }{D} } \label{eq:RMSE2}
\end{equation}
A lower RMSE indicates more accurate central estimates for the lower triangle and total reserves; here $D$ denotes the number of triangles run (50, or 10 for AUSI).
\item[3. Quantile forecast accuracy (75\% and 95\%), using quantile scores \eqref{eq:qs1} and \eqref{eq:qs2}]:
\begin{equation}
QS(\boldsymbol{\hat{X}_{q}}, \textbf{X}) = \frac{\sum_{(i,j): X_{i,j} \in \textbf{X}} (\mathbf{1}(X_{i,j} < \hat{X}_{i,j,q}) - q)( \hat{X}_{i,j,q} - X_{i,j}) }{|\textbf{X}|}
\label{eq:qs1}
\end{equation}
\begin{equation}
QS(\boldsymbol{\hat{R}_{q}}, \textbf{R}) = \frac{\sum_{i = 1}^{D} (\mathbf{1}(R_{i} < \hat{R}_{i,q}) - q)( \hat{R}_{i,q} - R_{i}) }{D}
\label{eq:qs2}
\end{equation}
A lower quantile score indicates more accurate quantile estimates, for individual cells and total reserves.
\end{description}
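For concreteness, the three metrics above can be computed as in the following sketch, where the inputs are flattened arrays over cells (or over triangles, for total reserves); this is an illustrative implementation under those assumptions.
\begin{verbatim}
import numpy as np

def log_score(x, dens):    # dens[i] = fitted density at x[i], eq. (LogScore)
    return np.mean(np.log(dens))

def rmse(x, x_hat):        # eqs. (RMSE1)/(RMSE2)
    return np.sqrt(np.mean((x - x_hat) ** 2))

def quantile_score(x, x_hat_q, q):   # eqs. (qs1)/(qs2), pinball form
    return np.mean(((x < x_hat_q).astype(float) - q) * (x_hat_q - x))
\end{verbatim}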
\section{Results}
\label{sec:Results}
In this section, we analyse the results of the MDN, with the ResMDN separately analysed in Section \ref{sec:practical}.
\subsection{Stable forecasts - rolling origin model validation}
Generally, the set of hyper-parameters selected by the rolling origin method produced reasonable and accurate central and distributional forecasts. All models were successful in predicting a decrease in the mean and volatility of claims in the later development quarters (DQs), which is a significant achievement given the low quantity of data available to the MDN in those periods. Figure \ref{fig:D2plotResults} plots the MDN's mean and volatility estimates on environment 2, showing the accuracy of its projections despite the environment's systematic complexity.
\begin{figure}[htb]
\centerline{\includegraphics[width = 10cm]{Images/Results/D2Overall.png}}
\caption{Environment 2 (speed up in claim processing): Plots of the mean (red) and standard deviation (black dotted) estimates of the MDN against actual losses (blue). The grey area represents the lower triangle, the forecasting region. These plots show the MDN producing reasonable and accurate forecasts, which were consistently observed.}
\label{fig:D2plotResults}
\end{figure}
The MDN also produced smooth, robust predictions. This can especially be seen in Figure \ref{fig:D4plotComparison}, which plots the MDN's and ccODP's fits to the highly volatile environment 4. The ccODP's coefficients are derived from few data points, especially in the later AQs and DQs, leading to a more volatile fit throughout. Meanwhile, the MDN produced a more holistic fit, resulting in a smoother, more robust forecast.
\begin{figure}[htb]
\centerline{\includegraphics[width = 10cm]{Images/Results/D4ccODP.png}}
\caption{Environment 4: Plots of the MDN's central estimate (red) and standard deviation (dotted black) fits against the ccODP model (green) fit and actual losses (blue). The grey area represents the lower triangle, the forecasting region. These plots demonstrate the smooth and robust forecasts produced by the MDN, especially relative to the ccODP, despite the volatile data given.}
\label{fig:D4plotComparison}
\end{figure}
The rolling origin method, in the third partition, uses the latest calendar quarters for validation. A model that overfits the data will not project accurately, and hence the MDN is encouraged to produce a smooth fit. In addition to Figure \ref{fig:D4plotComparison}, the smoothness can be visualised in Figure \ref{fig:AUSIplotResults}, where the MDN produces a notably smooth fit despite the substantial volatility present in the dataset.
The rolling origin model validation method proved successful at partitioning triangles as small as 36$\times$36. The scarcity of data relative to the large datasets to which neural networks are usually applied would normally discourage the use of this method. However, both the MDN and the rolling origin partition performed well on fewer than 700 data points, showing their appropriateness in a practical loss triangle reserving setting.
\begin{figure}[htb]
\centerline{\includegraphics[width = 10cm]{Images/Results/AUSIplotresults.png}}
\caption{AUSI: Plotting the MDN's central estimate (red) and standard deviation (black dotted) fits against actual losses (blue). The grey area represents the lower triangle, the forecasting region. The MDN provides smooth and accurate forecasts using real data.}
\label{fig:AUSIplotResults}
\end{figure}
However, as suspected, the rolling origin method was visibly unable to capture the inflation shock accurately for environment 3, due to a lack of training data in the later calendar periods. This shortcoming was detected through analysis of the residuals in the upper triangle, which were consistently negative in the high inflation periods. Hence, an adjusted data partition methodology was implemented for environment 3, visualised in Section \ref{app:partition}, which allocated more training data to the later calendar periods, allowing the MDN more exposure to the inflation shock. This adjustment enabled the MDN to capture the later trends more effectively, and is recommended for data sets where a significant change in the claim pattern is observed in later calendar periods.
\clearpage
\begin{figure}[htb]
\centerline{\includegraphics[width = 10cm]{Images/Results/D2plotReal.png}}
\caption{Environment 2: Plots comparing the mean estimates of the MDN (red) and ccODP (green) models to the empirical mean claims based on 250 simulations (black). The grey area represents the lower triangle, the forecasting region. The MDN captured the increase in claim settlement speed, while the ccODP did not.}
\label{fig:D2plotReal}
\end{figure}
\begin{figure}[htb]
\centerline{\includegraphics[width = 10cm]{Images/Results/D3plotReal.png}}
\caption{Environment 3: Plots comparing the mean estimates of the MDN (red) and ccODP (green) models to the empirical mean claims based on 500 simulations (black). The grey area represents the lower triangle, the forecasting region. The MDN captured the inflation shock more accurately than the ccODP.}
\label{fig:D3plotReal}
\end{figure}
\subsection{Distributional forecasting with the Mixture Density Network}
Overall, the Mixture Density Network (MDN) outperformed the ccODP in all environments across the qualitative and quantitative metrics. When analysing both individual cells and total reserves, the MDN outperformed the ccODP, often decisively.
Generally, the MDN's out-performance relative to the ccODP stems from its high flexibility: it can fit complex functions to the data and capture non-linearities. Such flexibility can easily lead to over-fitting, but the rolling origin model validation method ensured that the MDN remained flexible enough to capture the relevant trends in the data while minimising over-fitting.
\subsubsection{Central estimate analysis}
The MDN produced excellent central estimate projections in all environments. Where the environment had more structural heterogeneity, i.e. where the ccODP assumptions are not satisfied, the MDN decisively outperformed the ccODP in all metrics. Several key observations can be made:
\begin{itemize}
\item In environment 1, the data (by design) satisfies the ccODP assumptions well, hence the ccODP was very competitive. Nevertheless, the MDN slightly outperformed the ccODP in all quantitative metrics when measuring the accuracy of incremental claims, $X_{i,j}$. This can be attributed to the smooth function fit by the MDN.
\item In environment 2, the MDN successfully learned that claims processing speed is increasing, predicting a sharper spike in claim payments in the later AQs. Figure \ref{fig:D2plotReal} plots the results for this environment. The ccODP, assuming homogeneity in claim development, approximated claims as medium-tailed, leading to a clear over-estimation of claims in later AQs.
\item For environment 3, the MDN accurately captured the inflation shock at calendar quarter 30 (CQ30) onwards. Figure \ref{fig:D3plotReal} plots the results. The ccODP did not keep pace with the increased inflation due to its limited ability to handle heterogeneity, leading to its under-estimation of claims from CQ30 onwards.
\item In both environment 4 and the AUSI environment, the MDN handled volatile data well and provided accurate central estimates, outperforming the ccODP. This accuracy is shown in Figure \ref{fig:AUSIplotReal}, which plots the MDN's central estimates against the empirical mean based on the 10 AUSI triangles.
\end{itemize}
\begin{figure}[htb]
\centerline{\includegraphics[width = 10cm]{Images/Results/AUSIplotreal.png}}
\caption{AUSI: Plots comparing the mean estimates of the MDN (red) and ccODP (green) models to the empirical mean claims based on 10 triangles (black). The grey area represent the lower triangle, the forecasting region. These plots show the MDN producing fairly accurate mean forecasts in a real environment, outperforming the ccODP.}
\label{fig:AUSIplotReal}
\end{figure}
Figure \ref{fig:RMSE} provides boxplots of the MDN's RMSE as a percentage of the ccODP's for 50 triangles in each of the environments tested (10 for the AUSI environment). The boxplots show that the MDN achieved a lower RMSE than the ccODP for the majority of triangles in each environment, further demonstrating the MDN's higher forecasting accuracy.
\begin{figure}[htb]
\centerline{\includegraphics[width = 9cm]{Images/Results/BPNew.png}}
\caption{
\textbf{Left:} conventional boxplots displaying the MDN's RMSE as a percentage of ccODP's for each of the 50 triangles run for environments 1,2,3,4, and 10 triangles for AUSI.
\textbf{Right:} conventional boxplots displaying the MDN's increase in log score relative to the ccODP for each of the 50 triangles run for environments 1,2,3,4, and 10 triangles for AUSI. }
\label{fig:RMSE}
\end{figure}
Despite the MDN's success, it showed weaknesses in several areas, some of which were mitigated:
\begin{itemize}
\item Using the NLL loss function alone can encourage the MDN to over-estimate the volatility when its central estimate is inaccurate. This was seen in environment 2, where an MDN with an NLL loss function under-estimated claims in the (40,2) cell, leading to an excessively high volatility estimate for that region. In addition to sigma activity regularisation (see Section \ref{sec:algorithm}), adding an MSE term to the loss function helped to resolve this issue, as it encouraged the MDN to achieve more accurate central estimates; a sketch of this combined loss follows the list. Hence, an MSE term was added to the loss function for environments 1 and 2, and is recommended for loss triangles with sharp shifts in claims development.
\item Taking the log of aggregate claims linearised the data, which often led to faster and more accurate modelling. In this paper, environments 1 and 3 were fit with a mixture of Log-Gaussians, as this produced significantly more accurate results, especially in capturing the claims decay in later DQs.
\end{itemize}
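The MSE-augmented loss mentioned in the first point above can be sketched as follows, assuming per-cell NLL values and mixture means have already been computed; the weighting of the MSE term is a practitioner's choice, and the code is illustrative only.
\begin{verbatim}
import tensorflow as tf

def nll_plus_mse(y_true, nll, mu_hat, mse_weight=1.0):
    # nll: per-cell negative log-likelihoods; mu_hat: mixture means
    mse = tf.reduce_mean((y_true - mu_hat) ** 2)
    return tf.reduce_mean(nll) + mse_weight * mse
\end{verbatim}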
\begin{figure}[htb]
\centerline{\includegraphics[width = 10cm]{Images/Results/D2Shape.png}}
\caption{Environment 2: Plots comparing the 25\% (solid) and 75\% (dashed) risk margin estimates of the MDN (red) and ccODP (green) models to the empirical margins based on 250 simulations (black). The grey area represents the lower triangle, the forecasting region. These plots demonstrate the MDN providing more accurate volatility forecasts than the ccODP benchmark.}
\label{fig:D2plotShape}
\end{figure}
\begin{figure}[htb]
\centerline{\includegraphics[width = 10cm]{Images/Results/D3Shape.png}}
\caption{Environment 3 (Inflation Shock): Plots comparing the 25\% (solid), 75\% (dashed) and 95\% (dotted) risk margin estimates of the MDN (red) and ccODP (green) models to the empirical margins based on 500 simulations (black). The grey area represents the lower triangle, the forecasting region. Similarly to Figure \ref{fig:D2plotShape}, this figure demonstrates the MDN's ability to capture the increase in claim volatility due to the inflation shock, while the ccODP did not.}
\label{fig:D3plotShape}
\end{figure}
\clearpage
\subsubsection{Volatility estimate analysis}
The Mixture Density Network produced very smooth and accurate volatility estimates. Where noise in the data was low, the MDN projected low volatility, and vice versa. Overall, it outperformed the ccODP qualitatively and quantitatively in estimating the volatility of individual cells.
In relation to individual cells, the MDN's risk margin estimates at the 25th and 75th percentiles were overall more accurate than the ccODP's margins in almost all environments tested. The ccODP's variance is a function of its mean, and hence it failed where the central estimates failed. For example, in environment 2 (Figure \ref{fig:D2plotShape}), the ccODP over-estimated claims in later AQs, which led to it over-estimating margins in that same period. In environment 3, the ccODP under-estimated claims in later calendar quarters (CQs), as it did not effectively capture the inflation shock. This led to volatility estimates that were also too low in those periods, as Figure \ref{fig:D3plotShape} illustrates. The MDN dealt with these structural issues more effectively, leading to more accurate dispersion estimates of claims. In the volatile AUSI environment, the MDN produced smooth and accurate margin estimates, as Figure \ref{fig:AUSIplotShape} illustrates.
\begin{figure}[htb]
\centerline{\includegraphics[width = 10cm]{Images/Results/AUSIplotshape.png}}
\caption{AUSI: Plots comparing the 25\% (solid) and 75\% (dashed) risk margin estimates of the MDN (red) and ccODP (green) models to the empirical margins based on 10 triangles (black). The grey area represents the lower triangle, the forecasting region. This figure demonstrates the MDN producing accurate and smooth volatility forecasts on real data.}
\label{fig:AUSIplotShape}
\end{figure}
While some correlation was found in the results between central and volatility forecasts, the MDN allows a large degree of independence between the $\mu$ and $\sigma$ parameters estimated, allowing it to fit a much wider range of distributions than the ccODP.
Figure \ref{fig:RMSE} also provides boxplots of the MDN's increase in log score relative to the ccODP for 50 triangles in each of environments 1, 2, 3 and 4. The AUSI boxplot was based on 10 triangles. The boxplots show that the MDN achieved a higher log score than the ccODP for the majority of triangles in each environment, indicating a more accurate probabilistic forecast.
Despite the MDN's success, some weaknesses must be addressed:
\begin{itemize}
\item The MDN still showed signs of attributing noise to systematic trends. In environment 2, even though the DQ2 spike was fixed, the volatility was still too high for AQ30 and AQ40.
\item Similar to central estimates, the MDN often over-estimates volatility in later DQs, also due to a lack of data in that region.
\end{itemize}
\clearpage
\subsubsection{Quantile estimate analysis}
The MDN provided more accurate 75\% and 95\% quantiles in the majority of triangles run for each environment. These results follow from the MDN's ability to provide more accurate central and volatility estimates. The quantile analysis was mainly quantitative, using the quantile scores. Table \ref{table:triangles} confirms that in all environments, the MDN reduces the 75\% and 95\% quantile scores for the majority of triangles, indicating more accurate quantile estimates at those levels.
The MDN and ccODP models were run on fifty triangles for each of environments 1, 2, 3 and 4, and on the ten triangles partitioned from the AUSI data. The quantitative metrics are calculated for each triangle and averaged, with results for the MDN and ccODP models compared in Table \ref{table:mean}. As the table shows, the MDN had, on average, a lower RMSE, lower quantile scores and a higher log score in each environment, a significant out-performance. Table \ref{table:triangles} further reinforces these results by showing the percentage of triangles in which the MDN outperformed the ccODP for each quantitative metric. In each environment, the MDN outperforms the ccODP in each metric for the majority of triangles.
\begin{table}[h!]
\centering
\begin{tabular}{|c ||c|c|c| c| c| c |}
\hline
Environment & Model & Mean RMSE & RMSE & Mean LS & Mean QS & Mean QS \\
&&&(\% of ccODP)&&(75\%)&(95\%) \\ [0.5ex]
\hline\hline
1 & ccODP& 1,656,921.0&100 & -14.99 & 380,968.5 & 144,423.7 \\
\hline
\textbf{1} & \textbf{MDN} &\textbf{ 1,527,799} &\textbf{92.2}& \textbf{-14.93} & \textbf{375,413.6} & \textbf{140,754.3} \\
\hline\hline
2 & ccODP& 591,505.4 &100& -14.97 & 111,733.3 & 31,213.2 \\
\hline
\textbf{2} & \textbf{MDN}& \textbf{182,041.1} &\textbf{30.8}& \textbf{-13.31} & \textbf{50,628.9} &\textbf{ 16,326.1} \\
\hline\hline
3& ccODP& 190,482.4 &100& -13.46 & 57,562.2 & 32,164.1 \\
\hline
\textbf{3} & \textbf{MDN} & \textbf{162,621.8} &\textbf{85.4}& \textbf{-13.05} & \textbf{47,837.0} & \textbf{18,817.4} \\
\hline\hline
4 & ccODP& 1,053,008.0 &100& -14.72 & 232,011.3 & 96,506.1 \\
\hline
\textbf{4} & \textbf{MDN}&\textbf{ 652,230.7} &\textbf{61.9}& \textbf{-13.88} & \textbf{210,694.1} & \textbf{95,235.0} \\
\hline\hline
AUSI & ccODP& - &100& -14.04 & 124,976.3 & 58,760.6 \\
\hline
\textbf{AUSI} & \textbf{MDN}&\textbf{ -} &\textbf{84.2}& \textbf{-13.07} & \textbf{105,559.9} & \textbf{53,773.9} \\
\hline
\end{tabular}
\caption{The average score, over 50 triangles (10 for AUSI), of each quantitative metric: the RMSE, log score (LS) and quantile scores (QS) at the 75\% and 95\% levels. The MDN outperformed the ccODP in all environments and metrics when the average is taken.
}
\label{table:mean}
\end{table}
\begin{table}[h!]
\centering
\begin{tabular}{ |c||c|c|c|c|c| }
\hline
Environment& Model & RMSE &Log Score&Quantile Score (75\%)& Quantile Score (95\%)\\
\hline\hline
1 & MDN & 76 &80& 60 & 58\\
\hline\hline
2& MDN& 100 & 100 &100& 100\\
\hline\hline
3 & MDN&88 & 94& 90& 100\\
\hline\hline
4 & MDN&84 & 98& 66& 50\\
\hline\hline
AUSI &MDN &100 & 100& 100& 90\\
\hline
\end{tabular}
\caption{The percentage of triangles in which the MDN outperformed the ccODP for each environment and metric.
}
\label{table:triangles}
\end{table}
\subsubsection{Total reserves}
The MDN, in all environments except environment 1, showed more accurate central and dispersion estimates of total reserves than the ccODP (the dispersion accuracy is qualitatively assessed through the reserve density plots in Figure \ref{fig:TotalReserves}). An empirical distribution of total reserves, based on many simulations, is used as an estimate of the actual distribution of $R$. This out-performance follows from the MDN modelling the mean and volatility of individual cells more accurately than the ccODP. Because the claims in environment 1 are homogeneous in development, the ccODP provides highly competitive results and the MDN did not outperform it. However, for the more complicated environments 2, 3 and 4, the MDN had more accurate 75\% and 95\% quantiles of total reserves than the ccODP. Table \ref{table:totalreserves} reports the quantitative metrics of both models' total reserve estimates, $\hat{R}$. The qualitative analysis (Figure \ref{fig:TotalReserves}) also supports these results.
\begin{figure}[htb]
\centerline{\includegraphics[width = 10cm]{Images/Results/TotalReserves.png}}
\caption{A plot of the total reserve density estimates for all environments, $\hat{R}$, showing the MDN's (red) and ccODP's (green) estimated densities against the empirical density (black) based on hundreds of simulations. For each environment, only one triangle is analysed per plot. The MDN consistently provides more accurate results, except for environment 1.}
\label{fig:TotalReserves}
\end{figure}
\begin{table}[h!]
\centering
\begin{tabular}{|c ||c| c| c| c |}
\hline
Environment & Model & RMSE ($\times 10^6$)& QS(75\%) ($\times 10^6$) &
QS(95\%) ($\times 10^6$)\\ [0.5ex]
\hline\hline
1 & MDN & 111.7 & 37.3 & 15.4 \\
\hline
\textbf{1} & \textbf{ccODP}& \textbf{92.9} &\textbf{28.9} & \textbf{13.2} \\
\hline\hline
\textbf{2} & \textbf{MDN}& \textbf{80.4} & \textbf{20.1} &\textbf{4.25} \\
\hline
2 & ccODP& 260.2 & 65.9 & 13.45 \\
\hline\hline
\textbf{3} & \textbf{MDN} & \textbf{39.5} & \textbf{19.7} & \textbf{20.00} \\
\hline
3& ccODP& 53.0 & 37.0 & 44.37 \\
\hline\hline
\textbf{4} & \textbf{MDN}&\textbf{99.4} & \textbf{24.1} & \textbf{9.00} \\
\hline
4 & ccODP& 322.8 & 56.5 & 11.98 \\
\hline
\end{tabular}
\caption{The RMSE and quantile scores (QS) at the 75\% and 95\% levels, calculated for total reserve estimates, $\hat{R}$. The ccODP outperforms for environment 1, but the MDN outperforms otherwise.}
\label{table:totalreserves}
\end{table}
\section{Practical Considerations}
\label{sec:practical}
\subsection{Projection constraints}
\label{sec:Projection}
For both the MDN and ResMDN, central estimate projections in the lower triangle can be explicitly constrained for any desired set of cells, $X_{i,j}$. Without constraints, these projections can, and sometimes do, produce negative results in individual cells, and even for total reserves in some accident years (as will be seen for the ResMDN in Section \ref{sec:ResMDN}). This is unrealistic in some circumstances, and is best prevented by constraining projections to non-negativity. Similarly, it may be desirable to force projected payments to converge toward zero with increasing DQ. There might be other ``reasonableness constraints'' that the actuary wishes to apply. In this paper, projection constraints were applied successfully to the ResMDN for environments 2 and 3. Mean forecasts were constrained in the later DQs to be non-negative. There are some points to note:
\begin{itemize}
\item The MDN's forecast was virtually unchanged in the upper triangle, meaning the constraints set did not distract the model from fitting the in-sample data accurately.
\item The MDN produced a natural, smooth curve while still meeting the constraints. There was no evidence of a sudden jolt in the fitted function, nor did the function simply rest on the closest constraint boundary. Consequently, only a small proportion of cells need to be constrained to achieve reasonable results. For example, the non-negativity constraint described above was only applied to approximately 10\% of cells in the lower triangle.
\end{itemize}
To visually demonstrate the effect of constraining projections, we apply this methodology to environment 4, as shown in Figure \ref{fig:D4ECN}. The volatile data in this environment caused the MDN to occasionally over-estimate losses in the later DQs. Hence, the mean was constrained in that period to be approximately 0, which the model respected once the constraint was set. In general, the actuary can set any desired boundary on any cell in the lower triangle, to ensure the MDN strongly leans towards fitting functions with sensible projections.
\begin{figure}[htb]
\centerline{\includegraphics[width = 10cm]{Images/Results/D4ECN.png}}
\caption{Environment 4: A plot of the central estimates forecasted by the MDN (red) and MDN with projection constraints (orange), against the actual losses (blue). The grey area represents the lower triangle, the forecasting region. This figure demonstrates the MDN producing more reasonable results when projections are constrained. }
\label{fig:D4ECN}
\end{figure}
\subsection{Interpretability: ResMDN} \label{sec:ResMDN}
The ResMDN shows considerable potential in boosting the residuals of its GLM backbone while providing more interpretable results than the MDN. In this paper, we analyse the ResMDN on environments 2 and 3, as in both environments the ccODP provided a smooth backbone with clear, detectable flaws that could be analysed. In both environments, the ResMDN successfully detected the ccODP's shortcomings and corrected them, to an extent. Both mean and volatility estimates were corrected; the ResMDN did not just boost the mean, but the whole distribution. A visible shortcoming of the ResMDN is that it can produce unreasonable forecasts; for environment 2 the model occasionally under-estimated claims in later DQs, while for environment 3 it occasionally over-estimated the inflation shock. Hence, we applied projection constraints in the later DQs to mitigate these deviations.
\begin{itemize}
\item The ResMDN demonstrated the ability to recognise errors in the ccODP's central estimates and correct them. In environment 2, where the claim processing speed gradually increases, the ccODP models the claims as medium-tailed, as it assumes homogeneity in claim development. Hence, the ccODP under-estimated claims in early AQs and over-estimated claims later on. Figures \ref{fig:map1} and \ref{fig:map2} show heatmaps of the ccODP's and ResMDN's residuals for environment 2, respectively. As can be seen, the ResMDN successfully identified the ccODP's shortcomings just mentioned and reduced the residuals. Figure \ref{fig:ResMDNplotReal} plots the central estimates of the ResMDN and ccODP models, also showing the ResMDN's corrections producing a more accurate forecast.
\item The ResMDN also demonstrated the ability to recognise errors in the ccODP's volatility estimates and correct them. In environment 2, the ccODP models the speed-up in claims processing with a higher dispersion parameter, leading it to consistently over-estimate volatility. The ResMDN successfully learns this shortcoming and reduces volatility estimates accordingly. Figure \ref{fig:ResMDNplotShape} visualises the ResMDN's boosting for environment 2; the ResMDN has corrected volatility to almost match the empirical trend, indicating its higher distributional forecast accuracy.
\end{itemize}
\begin{figure}
\centering
\begin{subfigure}{.5\textwidth}
\centering
\includegraphics[width=0.95\linewidth]{Images/Results/ResHeatmap.png}
\caption{A heatmap of the ccODP's residuals}
\label{fig:map1}
\end{subfigure}%
\begin{subfigure}{.5\textwidth}
\centering
\includegraphics[width=0.95\linewidth]{Images/Results/ResMDNBoost.png}
\caption{A heatmap of the ResMDN's residuals}
\label{fig:map2}
\end{subfigure}
\caption{Environment 2: Heatmaps showing the ccODP's initial residuals in (a), calculated as $\mu_{i,j}^{ccODP} - X_{i,j}$. The ResMDN's residuals, calculated as $\mu_{i,j}^{ResMDN} - X_{i,j}$, are shown in (b). The lighter colours in (b) show that the ResMDN partially corrected the ccODP's residuals, producing more accurate forecasts. }
\label{fig:ResMDNheatmap}
\end{figure}
\begin{figure}[htb]
\centerline{\includegraphics[width = 10cm]{Images/Results/RNplotReal.png}}
\caption{Environment 2: Plots comparing the mean estimates of the ResMDN (orange) and ccODP (green) models to actual losses (blue). The grey area represents the lower triangle, the forecasting region. These plots demonstrate the ResMDN partially correcting the ccODP's residuals, leading to more accurate forecasts.}
\label{fig:ResMDNplotReal}
\end{figure}
\begin{figure}[htb]
\centerline{\includegraphics[width = 10cm]{Images/Results/D2RNShape.png}}
\caption{Environment 2: Plots comparing the 25\% (solid) and 75\% (dashed) risk margin estimates of the ResMDN (orange) and ccODP (green) models to the empirical margins based on 250 simulations (black). The grey area represents the lower triangle, the forecasting region. While Figure \ref{fig:ResMDNplotReal} demonstrates the ResMDN's ability to correct the ccODP's mean estimates, this figure shows the ResMDN correcting the ccODP's volatility estimates, allowing distributional forecasts to be more in line with empirical data.}
\label{fig:ResMDNplotShape}
\end{figure}
The ResMDN showed promise in detecting and partially correcting the embedded GLM's mean and volatility estimates. Given these results, several considerations arose that are worth mentioning:
\begin{itemize}
\item The ResMDN projected the ccODP's residuals unreasonably in some instances. For example, in environment 2, it learnt that the ccODP over-estimates claims in later AQs, so it adjusted this error by reducing the loss estimates for those periods, but continued that correction excessively into the later DQs. This issue was fixed by constraining projections for DQs 38--40. Central estimate bounds for these periods were set between 0 and 500,000 (for environment 2) and between 0 and 200,000 (for environment 3). These bounds are judgemental, but are reasonably wide to accommodate the uncertainty in forecasting. Figure \ref{fig:ResMDNconst} plots the difference in forecasts between the constrained and unconstrained ResMDN. While the plotted triangle is an extreme scenario (out of the 50-triangle sample), it justifies using the constrained ResMDN to ensure incremental claims tend to 0.
\item Despite the ResMDN correcting residuals, its log score is at a disadvantage compared to the ccODP's. This is mainly driven by the high coefficient of variation of incremental claims observed in the later DQs (only a small number of which are in-sample), which naturally favoured the ODP distribution over the mixture Gaussian.
\end{itemize}
\begin{figure}[htb]
\centerline{\includegraphics[width = 10cm]{Images/Results/RNconst.png}}
\caption{Environment 2: Plots comparing the mean estimates of the constrained (orange) and unconstrained (purple) ResMDN models against the ccODP backbone (green) and actual losses (blue). The grey area represents the lower triangle, the forecasting region. These plots show that the unconstrained ResMDN can significantly under-estimate claims, hence justifying the use of constraints to stabilise forecasts.}
\label{fig:ResMDNconst}
\end{figure}
Tables \ref{table:meanresmdn}, \ref{table:trianglesresmdn} and \ref{table:totalreservesresmdn} display the quantitative results of the unconstrained ResMDN (ResMDN) and constrained ResMDN (ResMDN-PC), in a similar fashion to Tables \ref{table:mean}, \ref{table:triangles} and \ref{table:totalreserves}. In accordance with the conclusions from the visual analysis in Figures \ref{fig:ResMDNplotReal} and \ref{fig:ResMDNplotShape}, the ResMDN produced more accurate central estimates and 75th and 95th quantiles for individual cells, demonstrating that it improved both the mean and the distribution over its ccODP backbone. In the majority of triangles, the ResMDN achieved a lower RMSE and lower quantile scores. Boosting residuals translated to more accurate total reserve estimates for the mean and the 75th and 95th quantiles. We note that the unconstrained ResMDN yielded more accurate central estimates for environment 2 than the constrained model. This is because the constraints slightly discourage boosting in order to avoid negative forecasts. Moreover, the apparent improvement in performance embodied in the reduced RMSE can come at the cost of negative projections.
\begin{table}[h!]
\centering
\begin{tabular}{|c ||c|c|c| c| c| c |}
\hline
Environment & Model & Mean RMSE & RMSE & Mean LS & Mean QS & Mean QS \\
&&&(\% of ccODP)&&(75\%)&(95\%) \\ [0.5ex]
\hline\hline
2 & ResMDN & 378,898.3 &64.1& -15.23 & 81,641.2 & 30,300.2 \\
\hline
2 & ResMDN-PC & 436,414.9 &73.8& -14.95 & 78,584.0 & 21,644.8 \\
\hline
3 & ResMDN & 190,471.4 &100.0& -14.01 & 55,497.3 & 25,348.4 \\
\hline
3 & ResMDN-PC & 182,944.4 &96.0& -13.65 & 53,003.2 & 26,971.7 \\
\hline\hline
\end{tabular}
\caption{The average score, over 50 triangles, of each quantitative metric: the RMSE, log score (LS) and quantile scores (QS) at the 75\% and 95\% levels for the ResMDN. The ResMDN-PC model features constrained projections.
}
\label{table:meanresmdn}
\end{table}
\begin{table}[h!]
\centering
\begin{tabular}{ |c||c|c|c|c|c| }
\hline
Environment& Model & RMSE &Log Score&Quantile Score (75\%)& Quantile Score (95\%)\\
\hline\hline
2 & ResMDN & 96 &58& 90 & 72\\
\hline
2 & ResMDN-PC & 96 &60& 96 & 100\\
\hline
3 & ResMDN & 78 &38& 80 & 76 \\
\hline
3 & ResMDN-PC & 90 &46& 92 & 76 \\
\hline\hline
\end{tabular}
\caption{The percentage of triangles in which the ResMDN outperformed the ccODP in each metric. The ResMDN-PC model features constrained projections.
}
\label{table:trianglesresmdn}
\end{table}
\begin{table}[h!]
\centering
\begin{tabular}{|c ||c| c| c| c |}
\hline
Environment & Model & RMSE ($\times 10^6$)& QS(75\%) ($\times 10^6$) &
QS(95\%) ($\times 10^6$)\\ [0.5ex]
\hline\hline
2 & ResMDN& 101.1 (39\%) &27.0 (41\%) & 13.69 (102\%) \\
\hline
2 & ResMDN-PC& 180.2 (69\%) &44.85 (68\%) & 9.13 (68\%) \\
\hline
3 & ResMDN& 55.9 (105\%) &19.9 (54\%) & 18.3 (41\%) \\%16.9 \\
\hline
3 & ResMDN-PC& 36.7 (69\%) &22.3 (60\%)& 25.8 (58\%) \\%16.9 \\
\hline\hline
\end{tabular}
\caption{The RMSE and quantile scores (QS) at the 75\% and 95\% levels, calculated for total reserve estimates, $\hat{R}$. The numbers in brackets represent the corresponding metric as a \% of the corresponding ccODP result. The ResMDN-PC model features constrained projections.}
\label{table:totalreservesresmdn}
\end{table}
\subsubsection{Comparison to the MDN}
While the ResMDN was able to detect the ccODP's systematic errors, it generally failed to outperform the non-embedded MDN in the above examples, as it only partially corrected the errors (see Figure \ref{fig:ResMDNheatmap} for an illustration). Nevertheless, the ResMDN showed clear potential in correcting an embedded GLM's mean and volatility estimates while maintaining a fair level of interpretability. While the current implementation is not as accurate as the MDN, if accuracy is essential then one possibility is to consider a more sophisticated GLM backbone.
Finally, it should also be noted that an additional benefit of the ResMDN is that its training time was noticeably faster than the MDN's. For environment 2, across the triangles analysed, the ResMDN finished training in 2,218 epochs on average, compared to 3,868 for the MDN. This 43\% reduction in training time is due to the ResMDN's GLM initialisation being more accurate than the MDN's random starting fit, meaning that less parameter adjustment is required. This observation parallels the findings of \citet*{GaRiWu2020}, who also noted faster training times under the CANN structured model.
\section{Conclusion} \label{sec:conclusion}
In this paper, we identified, addressed, and mitigated a number of obstacles which have so far hindered the popularisation of neural networks in loss triangle reserving. The MDN, a neural network design which specialises in distributional forecasting, was applied successfully to a variety of environments. The MDN produced more accurate central and volatility estimates, for both individual cells and total reserves. The rolling origin model validation method provided a framework for model testing and selection suited to sequential data. This sequential data partition gave preference to smooth, robust models, while also producing accurate forecasts.
We considered additional extensions involving projection controls and hybrid GLM-MDN approaches. We demonstrated that the MDN was able to significantly outperform the ccODP model in a variety of environments and metrics. While the ccODP model is not representative of the full potential of GLMs in loss reserving, it is a gold standard, and as such the results are compelling.
\section*{Acknowledgements}
Earlier versions of this paper were presented at the Actuaries Institute 2021 Virtual Summit, and at the ASTIN Online Colloquium. The authors are grateful for constructive comments received from colleagues who attended those events.
This research was supported under Australian Research Council's Linkage (LP130100723, with funding partners Allianz Australia Insurance Ltd, Insurance Australia Group Ltd, and Suncorp Metway Ltd) and Discovery (DP200101859) Projects funding schemes. The views expressed herein are those of the authors and are not necessarily those of the supporting organisations.
\section*{Data and Code}
We are unable to provide the dataset that was used in the empirical case study due to confidentiality. However, simulated data sets with similar features, as well as all relevant codes, can be found at \url{https://github.com/agi-lab/reserving-MDN-ResMDN}.
\section*{References}
\bibliographystyle{elsarticle-harv}
\section{Introduction}
\noindent
The need for modeling and analyzing bimodal bounded data, especially data on the unit interval, arises in many fields of real life,
such as bioinformatics \citep{ji05}, image classification \citep{ma09}, transactions at a car dealership
\citep{Smithson1}, and so on. In such situations, in order to model these phenomena probabilistically under a
parametric paradigm, probability distributions limited to $[0, 1]$ are indispensable.
The unimodal beta model is the most widely used model in the literature
to describe data on the unit interval, largely because of its
flexibility and fruitful properties \citep{jo95}.
However, despite its broad applicability in many fields, the beta distribution
is not suitable for modeling bimodal data on the unit interval.
In general, one uses mixtures of distributions to describe bimodal data.
For example, \cite{Smithson1} and \cite{Smithson2}
consider finite mixtures of beta regression models to analyze the priming
effects in judgments of imprecise probabilities. However, in general, mixtures
of distributions may suffer from identifiability problems in parameter estimation; see
\cite{li1,li2}. Thus, new mixture-free models which can accommodate both
unimodal and bimodal shapes are very important, as real-world data are often better modeled by them. Phenomena can show bimodality for many reasons, such as economic policies or the uncertainty of social movements and their effects on the economy \citep{wong,VC20}.
Variations of the beta model can be found in \cite{ferrari2004}, \cite{os08}, \cite{bayes12} and \cite{Hahn21}, among others. However, none of the models cited above is suitable for capturing bimodality.
Recently, probabilistic models for bimodality on the positive real line have been discussed by various authors. \cite{olmos17} introduced
a bimodal extension of the Birnbaum-Saunders distribution.
\cite{VFSPO20} proposed the bimodal gamma distribution.
\cite{VC20} considered a bimodal Weibull distribution. Despite this, to the best of our
knowledge, a specific parametric model to describe bimodal data observed on the unit interval has never been considered in the literature.
Based on the above discussion, and motivated by the presence of bimodality in proportion responses, we develop a model for
double-bounded response variables. In particular, we extend the usual
beta distribution using a quadratic transformation technique used to generate bimodal functions \citep{e:10}.
The approach therefore appears to be a new development in the literature.
We discuss several properties of the proposed model such as bimodality, real moments, hazard rate, entropy measures and identifiability.
Furthermore, we study the effects of the explanatory variables on the response variable using a regression model.
In what follows, we list some of the main contributions and advantages of the proposed model.
\begin{itemize}
\item We introduce a new family of distributions that is a flexible version of the usual
beta distribution, capable of fitting bimodal as well as unimodal data.
We provide general properties of the proposed model;
\item We propose an extended version of the quadratic transformation technique used to generate bimodal functions;
\item The proposed model allows the boundary values to lie on a smooth unified continuum
along with the rest of the open interval $(0, 1)$, as opposed to existing as one or two discontinuities, i.e.,
it does not require boundary values to be either discarded or treated separately \citep{Hahn21}.
Thus, one of the main motivations of this paper is to contribute another attractive
regression model for double-bounded response variables.
\end{itemize}
The rest of the article proceeds as follows. In Sections \ref{sect:2} and \ref{sect:3}, we present the new distribution and
derive some of its properties.
Then in Section \ref{sect:4}, we present the main properties of the bimodal Beta, which include entropy measures, stochastic representation and identifiability.
Section \ref{sect:5} presents the bimodal Beta regression model.
Also, the estimation method for the model parameters and diagnostic measures are discussed.
In Section \ref{sect:6}, some numerical results of the estimators and the empirical distribution of the residuals are presented with a discussion of the
results.
A real-life application related to the proportion of votes that Jair Bolsonaro received in the second round of the 2018 Brazilian elections is analyzed in Section \ref{sect:7}.
Section \ref{sect:8} summarizes the main findings of the paper.
\section{The Beta bimodal distribution}
\label{sect:2}
\noindent
In this Section, the bimodal Beta (BBeta) distribution is introduced and its density is derived. Moreover,
some results on the bimodality properties are obtained.
We say that a random variable (r.v.) $X$ has a BBeta distribution with parameter vector
$\boldsymbol{\theta}_\delta=(\alpha,\beta, \rho, \delta)$, $\alpha>0,\beta>0$, $\rho\geqslant 0$ and $\delta\in\mathbb{R}$,
denoted by $X\sim \text{BBeta}(\boldsymbol{\theta}_\delta)$,
if its probability density function (PDF) is given by
\begin{align}\label{beta-density}
f(x;\boldsymbol{\theta}_\delta)
=
\begin{cases}
\displaystyle
\frac{\rho+(1-\delta{x})^2}{ Z(\boldsymbol{\theta}_\delta) {B}(\alpha,\beta) } \,
x^{\alpha-1} \, (1-x)^{\beta-1},
& 0\leqslant x\leqslant 1
\\[0,2cm]
0, & \text{otherwise},
\end{cases}
\end{align}
where
\begin{align}\label{partition-function}
Z(\boldsymbol{\theta}_\delta)
=
1+\rho
- 2\delta\,{\alpha\over \alpha+\beta}
+
\delta^2\,{\alpha(\alpha+1)\over(\alpha+\beta)(\alpha+\beta+1)}
\end{align}
denotes the normalization constant and ${B}(\alpha,\beta)$ is the beta function.
When $\delta=0$, $\rho$ cancels in \eqref{beta-density}, and we recover the classic beta distribution
with parameter vector $\boldsymbol{\theta}_0=(\alpha,\beta,\rho, 0)\coloneqq (\alpha,\beta)$. The parameters $\alpha$, $\beta$ (which appear as exponents of the r.v.) and $\rho$ control the shape of the distribution. Uni- or bimodality is controlled by the parameter $\delta$. Note that for fixed $\alpha$, $\beta$ and $\delta\neq 0$, the parameter $\rho$ also controls the uni- or bimodality of the distribution. Figure \ref{pdfbbeta} shows some different shapes of the BBeta PDF for different combinations of parameters. Figures \ref{pdfbbeta}(a) and (b) display an $L$-shaped case together with its bimodal form, and a
bell-shaped case of the beta distribution, respectively.
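For reference, the density \eqref{beta-density} with normalization constant \eqref{partition-function} can be evaluated numerically as in the following sketch (illustrative code, not part of the formal development):
\begin{verbatim}
import numpy as np
from scipy.special import beta as beta_fn

def bbeta_pdf(x, a, b, rho, delta):
    # normalization constant Z(theta) of (partition-function)
    Z = (1 + rho - 2*delta*a/(a + b)
         + delta**2 * a*(a + 1)/((a + b)*(a + b + 1)))
    core = x**(a - 1) * (1 - x)**(b - 1) / beta_fn(a, b)
    return (rho + (1 - delta*x)**2) * core / Z
\end{verbatim}
For instance, evaluating \texttt{bbeta\_pdf(x, 6, 6, 0.1, 2)} over a grid of $x$ values in $(0,1)$ reproduces a bimodal shape (cf.\ the set $\mathcal{A}$ in Section \ref{sect:2}).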
\begin{figure}[H]
\centering
\subfloat[]{\label{fig:bimodalpdfbbeta}\includegraphics[width=0.45\textwidth]{bimodalpdfbbeta}}
\quad
\subfloat[]{\label{fig:flucpdfbbeta}\includegraphics[width=0.45\textwidth]{flucpdfbbeta}}
\caption{The PDF of BBeta for different values of parameters.}
\label{pdfbbeta}
\end{figure}
If $X\sim \text{BBeta}(\boldsymbol{\theta}_\delta)$, the cumulative distribution function (CDF), the survival function (SF) and the hazard rate function (HR) of $X$ are, respectively, given by
\begin{align}
&F(x;\boldsymbol{\theta}_\delta)
=
\dfrac{1}{ Z(\boldsymbol{\theta}_\delta) }
\biggl[
(1+\rho)\,
{I_{x}(\alpha, \beta)}
-2\delta\,
\dfrac{B_{x}(\alpha+1, \beta)}{B(\alpha, \beta)}
+\delta^2\,
\dfrac{B_{x}(\alpha+2, \beta)}{B(\alpha, \beta)}
\biggr], \label{CDF}
\\[0,2cm]
&S(x;\boldsymbol{\theta}_\delta)
=
\dfrac{1}{ Z(\boldsymbol{\theta}_\delta) }
\sum_{i=0}^{2}
c_i\,
\biggl[
\dfrac{B(\alpha+i, \beta)}{B(\alpha, \beta)}
-
\dfrac{B_{x}(\alpha+i, \beta)}{B(\alpha, \beta)}
\biggr]
\ \text{and} \label{SF}
\\[0,2cm]
&H(x;\boldsymbol{\theta}_\delta)
=
\dfrac{\big[\rho+(1-\delta{x})^2\big] x^{\alpha-1} \, (1-x)^{\beta-1}}{
\sum_{i=0}^{2} c_i\,
\big[B(\alpha+i, \beta)
-
B_{x}(\alpha+i, \beta)
\big]}, \label{HR}
\end{align}
where $I_{x}(\alpha, \beta)$ is the incomplete beta function ratio, $B_{x}(\alpha, \beta)$ is the incomplete beta function, and $c_0=1+\rho$, $c_1=-2\delta$, $c_2=\delta^2$. For more details on the derivation of these formulas see Section \ref{sect:3}.
\subsection{Bimodality properties}
\noindent
To state the following result, which guarantees the bimodality of the \text{BBeta} distribution, we define the set $\mathcal{A}$ formed by all $\boldsymbol{\theta}_\delta=(\alpha,\beta, \rho, \delta)\in(0,+\infty)^2\times[0,+\infty)\times\mathbb{R}$ such that the following hold:
\begin{eqnarray}
\alpha>1,\beta>1, \delta> 1,\rho\neq 0; \label{cond-1}
\\[0,1cm]
\delta(\alpha-3)>-2(\alpha+\beta-2); \label{cond-2}
\\[0,1cm]
2\delta(2+\delta-\alpha)<(\rho+1)(\alpha+\beta-2); \label{cond-3}
\\[0,1cm]
(\rho+1)(\alpha-1)>2\delta. \label{cond-4}
\end{eqnarray}
Note that the set $\mathcal{A}$ is non-empty because the point $\boldsymbol{\theta}_\delta=(6,6,0.1,2)\in \mathcal{A}$.
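Indeed, at this point conditions \eqref{cond-2}--\eqref{cond-4} read
\begin{align*}
2(6-3) = 6 &> -2(6+6-2) = -20, \\
2\cdot 2\,(2+2-6) = -8 &< (0.1+1)(6+6-2) = 11, \\
(0.1+1)(6-1) = 5.5 &> 2\cdot 2 = 4,
\end{align*}
while condition \eqref{cond-1} holds trivially.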
\begin{thm1}[Bimodality; case $\rho\neq 0$]
If $X\sim \text{BBeta}(\boldsymbol{\theta}_\delta)$ such that $\boldsymbol{\theta}_\delta\in \mathcal{A}$ then the \text{BBeta} distribution is bimodal.
\end{thm1}
\begin{proof}
A simple computation shows that
\begin{eqnarray}\label{derivative}
f'(x;\boldsymbol{\theta}_\delta)
=
{x^{\alpha-2}(1-x)^{\beta-2}\over Z(\boldsymbol{\theta}_\delta) {B}(\alpha,\beta)} \,
p_3(x),
\end{eqnarray}
where
\begin{align}\label{polynomial}
p_3(x)= \big[\rho+(1-\delta x)^2\big]\big[(\alpha-1)(1-x)-(\beta-1)x\big]
-2\delta(1-\delta x)x(1-x).
\end{align}
This implies that $f'(x;\boldsymbol{\theta}_\delta)=0$ if and only if $x=0$, $x=1$ or
\begin{align*}
p_3(x)&=
-x^3\delta^2(\alpha+\beta-2)+x^2\delta\big[\delta(\alpha-3)+2(\alpha+\beta-2)\big]
\\[0,2cm]
&+x\big[2\delta(2+\delta-\alpha)-(\rho+1)(\alpha+\beta-2)\big]
-2\delta+(\rho+1)(\alpha-1)=0.
\end{align*}
Since, by definition, the boundary points are never critical points, we exclude the analysis at these points.
By using \eqref{cond-1} in \eqref{derivative}-\eqref{polynomial} we have $f'(x;\boldsymbol{\theta}_\delta)\neq 0$ for all $x>1$.
In other words, the roots of $p_3(x)$ occur within the interval $(0,1)$.
We claim that, under conditions \eqref{cond-1}, \eqref{cond-2}, \eqref{cond-3} and \eqref{cond-4}, $p_3(x)$ has exactly three different roots within the interval $(0,1)$.
Indeed, under \eqref{cond-1}-\eqref{cond-4}, by Descartes' rule of signs
(see, e.g., \cite{xue2012loop} and \cite{griffiths1947introduction}), $p_3(x)$ has three or one positive roots.
But by conditions \eqref{cond-1}-\eqref{cond-4} and by Vieta's formulas (see, e.g., \cite{vinberg2003course}),
\begin{eqnarray*}
x_1+x_2+x_3={\delta\big[\delta(\alpha-3)+2(\alpha+\beta-2)\big]\over \delta^2(\alpha+\beta-2)},
\\[0,1cm]
x_1x_2+x_2x_3+x_1x_3=-{\big[2\delta(2+\delta-\alpha)-(\rho+1)(\alpha+\beta-2)\big]\over \delta^2(\alpha+\beta-2)},
\\[0,1cm]
x_1x_2x_3={-2\delta+(\rho+1)(\alpha-1)\over \delta^2(\alpha+\beta-2)},
\end{eqnarray*}
we obtain that the polynomial equation $p_3(x)= 0$ has exactly three positive roots $x_1,x_2$ and $x_3$ in $(0,1)$, and the claim follows.
Without loss of generality, let us assume that $x_1<x_2<x_3$. Since, for $\alpha>1,\beta>1$, $f(x; \boldsymbol{\theta}_\delta) \longrightarrow 0$ as $x \to 0^+$
and $f(x; \boldsymbol{\theta}_\delta) \longrightarrow 0$ as
$x\to 1^-$, it follows that the BBeta density \eqref{beta-density} increases on the intervals $(0, x_1)$
and $(x_2, x_3)$, and decreases on $(x_1, x_2)$ and $(x_3, 1)$. That is, $x_1$ and $x_3$ are two
maximum points and $x_2$ is the unique minimum point. This completes the proof of the theorem.
\end{proof}
\begin{thm1}[Bimodality; case $\rho= 0$]
If $X\sim \text{BBeta}(\boldsymbol{\theta}_\delta)$, $\rho= 0$, $\alpha>1,\beta>1$, $\delta>1$ and
\begin{eqnarray}\label{condition-zero}
\big[\delta(\alpha+1)+\alpha+\beta-2\big]^2>4\delta(\alpha+\beta)(\alpha-1),
\end{eqnarray}
then the \text{BBeta} distribution is bimodal.
\end{thm1}
\begin{proof}
When $\rho= 0$, in \eqref{derivative}, we have
{\scalefont{0.97}
\begin{align}\label{flinha}
f'(x;\boldsymbol{\theta}_\delta)
&=
{x^{\alpha-2}(1-x)^{\beta-2} (1-\delta x)\over Z(\boldsymbol{\theta}_\delta) {B}(\alpha,\beta)} \,
\Big\{
(1-\delta x)\big[(\alpha-1)(1-x)-(\beta-1)x\big]
-2\delta x(1-x)\Big\}
\\[0,1cm]
&=
{x^{\alpha-2}(1-x)^{\beta-2} (1-\delta x)\over Z(\boldsymbol{\theta}_\delta) {B}(\alpha,\beta)} \,
\Big\{
x^2(\alpha+\beta)\delta-\big[\delta(\alpha+1)+\alpha+\beta-2\big]x+(\alpha-1)
\Big\}. \nonumber
\end{align}
}
A direct calculation shows that $f'(x;\boldsymbol{\theta}_\delta)=0$ if and only if (excluding the boundary points) $x=1/\delta$ or $x=x_{\pm}$, where
\begin{align*}
x_{\pm}=
{
\delta(\alpha+1)+\alpha+\beta-2 \pm
\sqrt{\big[\delta(\alpha+1)+\alpha+\beta-2\big]^2-4\delta(\alpha+\beta)(\alpha-1)}
\over
2\delta(\alpha+\beta)
}.
\end{align*}
Note that, by conditions $\alpha>1,\beta>1$, $\delta>1$, in \eqref{flinha} we have $f'(x;\boldsymbol{\theta}_\delta)\neq 0$ for all $x>1$.
Hence, under condition \eqref{condition-zero}, the equation $f'(x;\boldsymbol{\theta}_\delta)=0$ has three positive roots $1/\delta$, $x_-$ and $x_+$ within the interval $(0,1)$, where $x_-<1/\delta<x_+$.
Since, for $\alpha>1,\beta>1$, $f(x; \boldsymbol{\theta}_\delta) \longrightarrow 0$ as $x \to 0^+$
and $f(x; \boldsymbol{\theta}_\delta) \longrightarrow 0$ as
$x\to 1^-$, the bimodality of the BBeta distribution is guaranteed, where $x_-$ and $x_+$ are two
maximum points and $x=1/\delta$ is the unique minimum point.
\end{proof}
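The bimodality just established is easy to verify numerically. The following \textsf{R} sketch (our own illustration; the variable names are not taken from the paper) computes the critical points $x_\pm$ and $1/\delta$ for an admissible parameter choice and evaluates the unnormalized density kernel, which suffices for locating modes:
\begin{verbatim}
# rho = 0 case: the critical points are 1/delta and the roots x_(+-)
# of the quadratic factor in the last display of the proof
alpha <- 3; beta_ <- 3; delta <- 1.5      # alpha, beta, delta > 1
stopifnot((delta*(alpha+1) + alpha + beta_ - 2)^2 >
          4*delta*(alpha+beta_)*(alpha-1))       # bimodality condition
b <- delta*(alpha+1) + alpha + beta_ - 2
x_pm <- (b + c(-1, 1)*sqrt(b^2 - 4*delta*(alpha+beta_)*(alpha-1))) /
        (2*delta*(alpha+beta_))
# unnormalized kernel: the normalizing constant does not move the modes
kern <- function(x) (1 - delta*x)^2 * x^(alpha-1) * (1 - x)^(beta_-1)
kern(sort(c(x_pm, 1/delta)))  # outer values positive (the two modes);
                              # the middle value is 0, since the kernel
                              # vanishes at 1/delta when rho = 0
\end{verbatim}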
\vspace*{-0,5cm}
\section{Some characteristics and properties}
\label{sect:3}
\noindent
In this section, closed-form expressions for the mean residual life function and the real moments of the BBeta distribution are obtained.
\begin{thm1}\label{truncated-moments}
If $X\sim \text{BBeta}(\boldsymbol{\theta}_\delta)$ then, for $0\leqslant a<b\leqslant 1$ and $r> -\alpha$,
\begin{eqnarray*}
\mathbb{E}\big(X^r\mathds{1}_{\{a\leqslant X\leqslant b\}}\big)
=
\dfrac{1}{ Z(\boldsymbol{\theta}_\delta) }
\sum_{i=0}^{2}
c_i\,
\biggl[
\dfrac{B_{b}(\alpha+r+i, \beta)}{B(\alpha, \beta)}
-
\dfrac{B_{a}(\alpha+r+i, \beta)}{B(\alpha, \beta)}
\biggr],
\end{eqnarray*}
where $c_0=1+\rho$, $c_1=-2\delta$, $c_2=\delta^2$, and
$B_x(\alpha, \beta)$ is the incomplete beta function.
\end{thm1}
\begin{proof}
By the definitions of expectation and of the BBeta density, we have
\begin{eqnarray*}
\mathbb{E}\big(X^r \mathds{1}_{\{a\leqslant X\leqslant b\}}\big)
=
\frac{1}{ Z(\boldsymbol{\theta}_\delta) }
\sum_{i=0}^{2}
c_i\, \mathbb{E}\big(Y^{r+i} \mathds{1}_{\{a\leqslant Y\leqslant b\}}\big), \quad Y\sim \text{BBeta}(\boldsymbol{\theta}_0).
\end{eqnarray*}
Since
\begin{eqnarray*}
\mathbb{E}\big(Y^{r+i} \mathds{1}_{\{a\leqslant Y\leqslant b\}}\big)
=
\dfrac{B_{b}(\alpha+r+i, \beta)}{B(\alpha, \beta)}
-
\dfrac{B_{a}(\alpha+r+i, \beta)}{B(\alpha, \beta)},
\end{eqnarray*}
the proof of the theorem follows.
\end{proof}
Taking $r = 0$, $b=x$ and $a = 0$ in Theorem \ref{truncated-moments}, we get the formula \eqref{CDF} for the CDF.
Letting $r = 0$, $b=1$ and $a = x$ in Theorem \ref{truncated-moments}, we get the formula \eqref{SF} for the SF.
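For numerical work, the incomplete beta function can be evaluated in \textsf{R} as \texttt{pbeta(x,a,b)*beta(a,b)}, and the normalizing constant $Z(\boldsymbol{\theta}_\delta)$ follows from Theorem \ref{truncated-moments} with $r=0$, $a=0$ and $b=1$, since the density must integrate to one. A minimal sketch of the density and the CDF (function names such as \texttt{dbbeta} are ours, not an implementation by the authors):
\begin{verbatim}
# incomplete beta function B_x(a, b)
ibeta <- function(x, a, b) pbeta(x, a, b) * beta(a, b)

# normalizing constant Z: take r = 0, a = 0, b = 1 in the
# truncated-moment formula, so that the density integrates to one
Zconst <- function(shape1, shape2, rho, delta) {
  ci <- c(1 + rho, -2*delta, delta^2)
  sum(ci * beta(shape1 + 0:2, shape2)) / beta(shape1, shape2)
}

# BBeta density f(x; alpha, beta, rho, delta)
dbbeta <- function(x, shape1, shape2, rho, delta)
  (rho + (1 - delta*x)^2) * x^(shape1-1) * (1 - x)^(shape2-1) /
    (Zconst(shape1, shape2, rho, delta) * beta(shape1, shape2))

# CDF: r = 0, a = 0, b = x in the theorem
pbbeta <- function(x, shape1, shape2, rho, delta) {
  ci <- c(1 + rho, -2*delta, delta^2)
  num <- ci[1]*ibeta(x, shape1, shape2) +
         ci[2]*ibeta(x, shape1 + 1, shape2) +
         ci[3]*ibeta(x, shape1 + 2, shape2)
  num / (Zconst(shape1, shape2, rho, delta) * beta(shape1, shape2))
}

# sanity checks:
# integrate(dbbeta, 0, 1, shape1=2, shape2=2, rho=0.1, delta=2.4)  # ~ 1
# pbbeta(1, 2, 2, 0.1, 2.4)                                        # ~ 1
\end{verbatim}
The helpers \texttt{ibeta}, \texttt{Zconst}, \texttt{dbbeta} and \texttt{pbbeta} are reused in the sketches below.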
\begin{corollary}[Mean residual life function]
If $X\sim \text{BBeta}(\boldsymbol{\theta}_\delta)$ then mean residual life function of $X$, defined by ${\rm MRL}(x,\boldsymbol{\theta}_\delta)=\int_{x}^{1} S(t;\boldsymbol{\theta}_\delta)\, {\rm d}t/S(x;\boldsymbol{\theta}_\delta)$, is written as
\begin{eqnarray*}
{\rm MRL}(x,\boldsymbol{\theta}_\delta)=
\dfrac{ \sum_{i=0}^{2}
c_i\,
\big\{
\big[B(\alpha+i+1, \beta)-x B(\alpha+i, \beta)\big]
-
\big[B_{x}(\alpha+i+1, \beta)-x B_{x}(\alpha+i, \beta)\big]
\big\}}{\sum_{i=0}^{2}
c_i\,
\big[
B(\alpha+i, \beta)
-
B_{x}(\alpha+i, \beta)\big]},
\end{eqnarray*}
where $c_0=1+\rho$, $c_1=-2\delta$ and $c_2=\delta^2$.
\end{corollary}
\begin{proof}
Integration by parts gives
\begin{eqnarray}\label{MRL}
{\rm MRL}(x,\boldsymbol{\theta}_\delta)={1\over S(x,\boldsymbol{\theta}_\delta)} \, \mathbb{E}\big(X\mathds{1}_{\{X\geqslant x\}}\big) -x.
\end{eqnarray}
Taking $r=1$, $a=x$ and $b=1$ in Theorem \ref{truncated-moments}, we get
\begin{eqnarray*}
\mathbb{E}\big(X\mathds{1}_{\{X\geqslant x\}}\big)
=
\dfrac{1}{ Z(\boldsymbol{\theta}_\delta) }
\sum_{i=0}^{2}
c_i\,
\biggl[
\dfrac{B(\alpha+i+1, \beta)}{B(\alpha, \beta)}
-
\dfrac{B_{x}(\alpha+i+1, \beta)}{B(\alpha, \beta)}
\biggr].
\end{eqnarray*}
By substituting the above identity into \eqref{MRL}, the proof follows.
\end{proof}
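A direct numerical implementation of the corollary, reusing the \texttt{ibeta} helper defined above (again, the names are ours):
\begin{verbatim}
# mean residual life of the BBeta distribution
mrl_bbeta <- function(x, shape1, shape2, rho, delta) {
  ci <- c(1 + rho, -2*delta, delta^2)
  num <- den <- 0
  for (i in 0:2) {
    num <- num + ci[i+1] *
      ((beta(shape1+i+1, shape2) - x*beta(shape1+i, shape2)) -
       (ibeta(x, shape1+i+1, shape2) - x*ibeta(x, shape1+i, shape2)))
    den <- den + ci[i+1] *
      (beta(shape1+i, shape2) - ibeta(x, shape1+i, shape2))
  }
  num / den
}
\end{verbatim}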
By combining the formula \eqref{SF} for the SF with the definition of the BBeta distribution, we obtain the formula \eqref{HR} for the HR.
\begin{corollary}[Real moments]\label{Real moments}
If $X\sim \text{BBeta}(\boldsymbol{\theta}_\delta)$ and $r> -\alpha$, then
\begin{eqnarray*}
\mathbb{E}(X^r)
=
\dfrac{1}{ Z(\boldsymbol{\theta}_\delta) }
\biggl[
(1+\rho)\,
\dfrac{B(\alpha+r, \beta)}{B(\alpha, \beta)}
-2\delta\,
\dfrac{B(\alpha+r+1, \beta)}{B(\alpha, \beta)}
+\delta^2\,
\dfrac{B(\alpha+r+2, \beta)}{B(\alpha, \beta)}
\biggr].
\end{eqnarray*}
\end{corollary}
\begin{proof}
By taking $b=1$ and $a = 0$ in Theorem \ref{truncated-moments} we have the following:
\begin{eqnarray*}
\mathbb{E}(X^r)
=
\dfrac{1}{ Z(\boldsymbol{\theta}_\delta) }
\sum_{i=0}^{2}
c_i\,
\dfrac{B(\alpha+r+i, \beta)}{B(\alpha, \beta)},
\end{eqnarray*}
where $c_0=1+\rho$, $c_1=-2\delta$ and $c_2=\delta^2$.
\end{proof}
\begin{corollary}[Raw moments]\label{moments}
If $X\sim \text{BBeta}(\boldsymbol{\theta}_\delta)$ and $k\in[0,+\infty)\cap\mathbb{Z}$, then
\begin{align*}
\mathbb{E}(X^k)
=
\dfrac{1}{Z(\boldsymbol{\theta}_\delta)}
\Biggl(\prod_{j=0}^{k-1}\frac{\alpha+j}{\alpha+\beta+j}\Biggr)
\biggl[
1+\rho
-
{2\delta(\alpha+k)\over \alpha+\beta+k}
+
{\delta^2(\alpha+k)(\alpha+k+1)\over (\alpha+\beta+k)(\alpha+\beta+k+1)}
\biggr],
\end{align*}
where we adopt the convention that $\prod_{j=0}^{-1}({\alpha+j})/({\alpha+\beta+j})=1$.
\end{corollary}
\begin{proof}
By taking $r=k$ in Corollary \ref{Real moments} and using the simple recurrence relation
\begin{eqnarray}\label{rel-rec}
B(x+k,y)=B(x,y) \, \prod_{j=0}^{k-1}\frac{x+j}{x+y+j}
\end{eqnarray}
we have
\begin{eqnarray*}
\mathbb{E}(X^k)
=
\dfrac{1}{ Z(\boldsymbol{\theta}_\delta) }
\sum_{i=0}^{2}
c_i
\prod_{j=0}^{k+i-1}\frac{\alpha+j}{\alpha+\beta+j},
\end{eqnarray*}
where $c_0=1+\rho$, $c_1=-2\delta$ and $c_2=\delta^2$.
From the above formula the proof follows immediately.
\end{proof}
As a consequence of the above corollary, closed-form expressions for the standardized moments, variance, skewness and kurtosis of the bimodal Beta r.v. $X$ are easily obtained, as illustrated below.
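For instance, these quantities can be computed from the first four raw moments; a short sketch reusing \texttt{Zconst} (function and variable names are ours):
\begin{verbatim}
# k-th raw moment of X ~ BBeta, via the sum form in the proof above
mom_bbeta <- function(k, shape1, shape2, rho, delta) {
  pk <- function(m) if (m == 0) 1 else
          prod((shape1 + 0:(m-1)) / (shape1 + shape2 + 0:(m-1)))
  ci <- c(1 + rho, -2*delta, delta^2)
  sum(ci * sapply(0:2, function(i) pk(k + i))) /
    Zconst(shape1, shape2, rho, delta)
}

m <- sapply(1:4, mom_bbeta, shape1 = 2, shape2 = 2, rho = 0.1, delta = 2.4)
v    <- m[2] - m[1]^2                                   # variance
skew <- (m[3] - 3*m[1]*m[2] + 2*m[1]^3) / v^1.5         # skewness
kurt <- (m[4] - 4*m[1]*m[3] + 6*m[1]^2*m[2] - 3*m[1]^4) / v^2  # kurtosis
\end{verbatim}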
\begin{rem1}\label{obs-1}
Taking $\delta=0$ in Corollaries \ref{Real moments} and \ref{moments} we obtain the following known formulas:
\begin{align*}
\mathbb{E}(Y^r)=
\dfrac{B(\alpha+r, \beta)}{B(\alpha, \beta)}, \ r>-\alpha;
\quad
\mathbb{E}(Y^k)
=
\prod_{j=0}^{k-1}\frac{\alpha+j}{\alpha+\beta+j}, \
k\in[0,+\infty)\cap\mathbb{Z}; \quad Y\sim \text{BBeta}(\boldsymbol{\theta}_0).
\end{align*}
\end{rem1}
An immediate application of Corollary \ref{moments} provides the following result.
\begin{corollary}
If $X\sim \text{BBeta}(\boldsymbol{\theta}_\delta)$ then
\begin{align*}
\mathbb{E}(X)&=
\dfrac{1}{Z(\boldsymbol{\theta}_\delta)} \,
\frac{ \alpha}{\alpha+\beta}\,
\biggl[
1+\rho -
{2\delta(\alpha+1)\over \alpha+\beta+1}
+
{\delta^2(\alpha+1)(\alpha+2)\over (\alpha+\beta+1)(\alpha+\beta+2)}
\biggr];
\\[0,2cm]
\mathbb{E}(X^2)&=
\dfrac{1}{Z(\boldsymbol{\theta}_\delta)}\,
\frac{\alpha(\alpha+1)}{(\alpha+\beta)(\alpha+\beta+1)}\,
\biggl[
1+\rho -
{2\delta(\alpha+2)\over \alpha+\beta+2}
+
{\delta^2(\alpha+2)(\alpha+3)\over (\alpha+\beta+2)(\alpha+\beta+3)}
\biggr].
\end{align*}
\end{corollary}
\begin{rem1}
The deformed moment generating function of BBeta r.v. $X$ is given by the following expression:
\begin{equation*}
\mathbb{E}\big[\exp_q(tX)\big]
=
{\Gamma(\alpha)\Gamma(\beta) (A_1 + A_2) \over {B}(\alpha,\beta)\Gamma(\alpha+\beta) Z(\boldsymbol{\theta}_{\delta})},
\quad q \in [0,1), t \geqslant 0,
\end{equation*}
\noindent
where
$\exp_q(tx)=[1 + (1 - q)tx]^{1/ (1-q)}$ denotes the deformed exponential function,
$A_1=(1+\rho){H}_2({1/(q-1)},\alpha,\alpha+\beta,t(q-1))$ and
{\small
\begin{align*}
A_2={
-2(1+\alpha+\beta) {H}_2({1 \over q-1},1+\alpha,1+\alpha+\beta,t(q-1))
+
(1+\alpha) \delta {H}_2({1 \over q-1},2+\alpha,2+\alpha+\beta,t(q-1))
\over
(\alpha + \beta )(1 + \alpha + \beta)/(\delta \alpha)}.
\end{align*}
}
Here,
$H_2(a,b,c,z)$ denotes the hypergeometric function $_2 F_1 (a,b;c;z)$. By using L'Hospital's rule, we have that, as $q\to 1$, $\exp_q(tx)$ reduces to $\exp(tx)$.
\end{rem1}
\begin{corollary}[Moment generating function]\label{momentMGF}
If $X\sim \text{BBeta}(\boldsymbol{\theta}_\delta)$ and $t \geqslant 0$, then
{\small
\begin{align*}
\mathbb{E}\big[\exp(tX)\big]
&=\Gamma(\alpha + \beta){ t^{-1}
\biggl\{1+\rho +
\frac{\alpha \delta [\delta(1+\alpha)-2(1+\alpha+\beta)]}{(\alpha+\beta)(1+\alpha+\beta)}\biggr\}^{-1}}
\\[0,2cm] \nonumber
&
\times\left\{
t[(\delta-1)^2 + \rho]
-\beta \delta^2 H_1(\alpha,\alpha+\beta,t)
+
\beta \delta [\delta(\alpha+\beta)+t(2-\delta)] H_1(\alpha,1+\alpha+\beta,t)
\right\},
\end{align*}
}
where
$H_1(a,b,z)$ is the regularized confluent hypergeometric function $_1 F_1(a;b;z)/\Gamma(b)$.
\end{corollary}
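Base \textsf{R} does not provide the regularized confluent hypergeometric function, but the MGF can be checked by direct numerical integration, reusing the \texttt{dbbeta} sketch given earlier (our illustration):
\begin{verbatim}
mgf_bbeta <- function(t, shape1, shape2, rho, delta)
  sapply(t, function(s)
    integrate(function(x) exp(s*x) * dbbeta(x, shape1, shape2, rho, delta),
              0, 1)$value)
mgf_bbeta(c(0, 0.5, 1), 2, 2, 0.1, 2.4)   # first value should be ~ 1
\end{verbatim}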
\vspace*{-0,5cm}
\section{Further properties}
\label{sect:4}
\noindent
In this section, we consider some properties of the BBeta distribution, such as the
entropy measures, stochastic representation and identifiability.
\subsection{Entropy measures}
\noindent
Let $X\sim \text{BBeta}(\boldsymbol{\theta}_\delta)$.
The Tsallis \citep{Tsallis1988} entropy associated with a non-negative random variable $X$ is defined by
\begin{eqnarray*}
S_q(X)=\dfrac{1}{q-1}\, \bigg[1-\int_{0}^{1} f^q(x;\boldsymbol{\theta}_\delta) \, {\rm d}x\bigg], \quad q\neq 1.
\end{eqnarray*}
The quadratic entropy \citep{Rao2010} is defined as
\begin{eqnarray*}
H_2(X)=-\log \int_{0}^1 f^2(x;\boldsymbol{\theta}_\delta) \, {\rm d}x.
\end{eqnarray*}
We also define the Shannon entropy \citep{Shannon1948} as
\begin{eqnarray*}
H_1(X)=
-
\int_{0}^{1}
f(x;\boldsymbol{\theta}_\delta)
\log f(x;\boldsymbol{\theta}_\delta) \, {\rm d}x.
\end{eqnarray*}
By using L'Hospital's Rule, we have that, if $q\to 1$, then $S_q(X) \to H_1(X)$ and the usual definition of Shannon's entropy is recovered.
\begin{thm1}[Tsallis entropy]
Let $X\sim \text{BBeta}(\boldsymbol{\theta}_\delta)$, $\alpha_q=q(\alpha-1)+1>0$, $\beta_q=q(\beta-1)+1>0$, $\rho\geqslant 1$ and $0\leqslant q<1$. Then
\begin{align*}
\int_{0}^{1} f^q(x;\boldsymbol{\theta}_\delta) \, {\rm d}x
\leqslant
\frac{ {B}(\alpha_q,\beta_q)}{ [Z(\boldsymbol{\theta}_\delta) {B}(\alpha,\beta)]^q } \,
\biggl[
1+q\rho
+
q\delta^2\,
{{B}(\alpha_q+2,\beta_q)\over {B}(\alpha_q,\beta_q)}
-
2q\delta\,
{{B}(\alpha_q+1,\beta_q)\over {B}(\alpha_q,\beta_q)}
\biggr].
\end{align*}
Moreover, equality holds in the limiting case $q\to 1$.
In particular, for $\alpha_q>0$, $\beta_q>0$ and $0\leqslant q<1$, the Tsallis entropy exists.
\end{thm1}
\begin{proof}
By definition of BBeta PDF, we have
\begin{eqnarray}\label{exp-1}
\int_{0}^{1} f^q(x;\boldsymbol{\theta}_\delta) \, {\rm d}x
=
\frac{1}{ [Z(\boldsymbol{\theta}_\delta) {B}(\alpha,\beta)]^q } \,
\int_{0}^{1}
[\rho+(1-\delta{x})^2]^q
x^{q(\alpha-1)} \, (1-x)^{q(\beta-1)}
\, {\rm d}x.
\end{eqnarray}
By using the inequality (see, e.g., \cite{Hardy34})
$
a^b\leqslant 1+(a-1)b, \ \text{for} \ b\in[0,1], \ a\geqslant 1,
$
the expression on the right-hand side of \eqref{exp-1} is at most
\begin{align*}
&\frac{1}{ [Z(\boldsymbol{\theta}_\delta) {B}(\alpha,\beta)]^q } \,
\int_{0}^{1}
\big\{1+[\rho-1+(1-\delta x)^2] q\big\}\, x^{q(\alpha-1)} \, (1-x)^{q(\beta-1)}
\, {\rm d}x
\\[0,2cm]
&=
\frac{{B}(q(\alpha-1)+1,q(\beta-1)+1)}{ [Z(\boldsymbol{\theta}_\delta) {B}(\alpha,\beta)]^q } \,
[1+q\rho+q\delta^2\mathbb{E}(Y^2)-2q\delta\mathbb{E}(Y)],
\end{align*}
where $Y\sim \text{Beta}(\alpha_q,\beta_q)$.
By applying Remark \ref{obs-1}, the proof follows.
\end{proof}
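The bound can be checked numerically; the sketch below reuses \texttt{dbbeta} and \texttt{Zconst}, with arbitrary parameter values satisfying the hypotheses $\rho\geqslant1$ and $0\leqslant q<1$:
\begin{verbatim}
q <- 0.7; shape1 <- 2; shape2 <- 3; rho <- 1.2; delta <- -0.5
aq <- q*(shape1 - 1) + 1; bq <- q*(shape2 - 1) + 1   # alpha_q, beta_q > 0
lhs <- integrate(function(x) dbbeta(x, shape1, shape2, rho, delta)^q,
                 0, 1)$value
ZB  <- Zconst(shape1, shape2, rho, delta) * beta(shape1, shape2)
rhs <- beta(aq, bq) / ZB^q *
  (1 + q*rho + q*delta^2 * beta(aq+2, bq)/beta(aq, bq) -
   2*q*delta * beta(aq+1, bq)/beta(aq, bq))
lhs <= rhs   # TRUE
\end{verbatim}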
By using identifiability (see Subsection \ref{Identifiability}), it is possible to derive an upper bound
for the Tsallis entropy and for $\log_q(f)$, where, for $x > 0$, $\log_q(x)=(x^{1-q}-1)/(1-q)$, $q\neq 1$, denotes the deformed logarithm \citep{Tsallisbook09}. With such upper bounds available,
the MLqE method can be applied reliably to estimate the parameters of the BBeta distribution.
More generally, the existence of the entropy measures and of the MGF (see Corollary \ref{momentMGF}) indicates that the proposed distribution is safe to use for modelling data sets. By contrast, unboundedness of the PDF or nonexistence of moments restricts the types of real data that can be modelled and may cause numerical errors in the optimization carried out to obtain the parameter estimates \citep{gut13}.
\begin{proposition}[Quadratic entropy]
Let $X\sim \text{BBeta}(\boldsymbol{\theta}_\delta)$ with $\alpha>1/2$, $\beta>1/2$. Then
\begin{align*}
H_2(X)&=
-\log B(2\alpha-1,2\beta-1)
+
2\log Z(\boldsymbol{\theta}_\delta)
+
2\log B(\alpha,\beta)
-
\log\Biggl[
\sum_{i=0}^4
\widetilde{c}_i \prod_{j=0}^{i-1}\frac{2\alpha-1+j}{2(\alpha+\beta)-2+j}
\Biggr],
\end{align*}
where $\widetilde{c}_0=(1 + \rho)^2$, $\widetilde{c}_1=- 4 \delta(1+\rho)$, $\widetilde{c}_2=2 \delta^2(3+\rho)$, $\widetilde{c}_3=- 4 \delta^3$ and $\widetilde{c}_4=\delta^4$.
\end{proposition}
\begin{proof}
Since $\alpha>1/2$ and $\beta>1/2$, by the definitions of the density $f$ and of expectation,
\begin{align}\label{exp-1-1}
\int_{0}^{1} f^2(x;\boldsymbol{\theta}_\delta) \, {\rm d}x
&=
\frac{{B}(2\alpha-1,2\beta-1)}{[Z(\boldsymbol{\theta}_\delta){B}(\alpha,\beta)]^2} \,
\int_{0}^{1}
[\rho+(1-\delta{x})^2]^2\,
{x^{2(\alpha-1)} \, (1-x)^{2(\beta-1)}\over {B}(2\alpha-1,2\beta-1)}
\, {\rm d}x \nonumber
\\[0,3cm]
&=
\frac{{B}(2\alpha-1,2\beta-1)}{[Z(\boldsymbol{\theta}_\delta){B}(\alpha,\beta)]^2} \,
\mathbb{E}\big\{[\rho+(1-\delta{Y})^2]^2\big\},
\quad Y\sim{\rm Beta}(2\alpha-1,2\beta-1).
\end{align}
Expanding the quadratic factor above,
\begin{align*}
\mathbb{E}\big\{[\rho+(1-\delta{Y})^2]^2\big\}
=
(1 + \rho)^2
- 4 \delta(1+\rho) \mathbb{E}(Y)
+ 2 \delta^2(3+\rho) \mathbb{E}(Y^2)
- 4 \delta^3 \mathbb{E}(Y^3) + \delta^4 \mathbb{E}(Y^4)
\end{align*}
and replacing in \eqref{exp-1-1}, we have
\begin{align*}
\int_{0}^{1} f^2(x;\boldsymbol{\theta}_\delta) \, {\rm d}x
=
\frac{{B}(2\alpha-1,2\beta-1)}{[Z(\boldsymbol{\theta}_\delta){B}(\alpha,\beta)]^2} \,
\sum_{i=0}^4
\widetilde{c}_i \mathbb{E}(Y^i),
\end{align*}
with $\widetilde{c}_0=(1 + \rho)^2$, $\widetilde{c}_1=- 4 \delta(1+\rho)$, $\widetilde{c}_2=2 \delta^2(3+\rho)$, $\widetilde{c}_3=- 4 \delta^3$ and $\widetilde{c}_4=\delta^4$.
Hence, from Remark \ref{obs-1}, applied with parameters $2\alpha-1$ and $2\beta-1$, and the definition of the quadratic entropy, the proof follows.
\end{proof}
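As a check on the closed form, it can be compared with direct numerical integration (our sketch, reusing \texttt{dbbeta} and \texttt{Zconst}):
\begin{verbatim}
shape1 <- 2; shape2 <- 3; rho <- 0.1; delta <- 2.4  # shape1, shape2 > 1/2
ct <- c((1+rho)^2, -4*delta*(1+rho), 2*delta^2*(3+rho),
        -4*delta^3, delta^4)
# moments of Beta(2*alpha - 1, 2*beta - 1), as in the proof
pr <- sapply(0:4, function(i) if (i == 0) 1 else
  prod((2*shape1 - 1 + 0:(i-1)) / (2*shape1 + 2*shape2 - 2 + 0:(i-1))))
H2_closed  <- -log(beta(2*shape1 - 1, 2*shape2 - 1)) +
  2*log(Zconst(shape1, shape2, rho, delta)) +
  2*log(beta(shape1, shape2)) - log(sum(ct * pr))
H2_numeric <- -log(integrate(function(x)
  dbbeta(x, shape1, shape2, rho, delta)^2, 0, 1)$value)
c(H2_closed, H2_numeric)   # the two values should agree
\end{verbatim}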
\begin{lemma}[First logarithmic moment about zero]\label{exp-log}
If $X\sim \text{BBeta}(\boldsymbol{\theta}_\delta)$, with $\alpha\geqslant 2$, then
\begin{eqnarray*}
\mathbb{E}\big[\log (X)\big]
=
\frac{1}{ Z(\boldsymbol{\theta}_\delta) B(\alpha, \beta)}
\sum_{i=0}^{2}
c_i \,
\dfrac{\partial B(\alpha+i, \beta)}{\partial\alpha},
\end{eqnarray*}
where $c_0=1+\rho$, $c_1=-2\delta$ and $c_2=\delta^2$.
\end{lemma}
\begin{proof} By the definition of the expectation of a function of the BBeta r.v. $X$, we have
\begin{align*}
\mathbb{E}\big[\log (X)\big]=
\frac{1}{ Z(\boldsymbol{\theta}_\delta) }
\sum_{i=0}^{2}
c_i\, \mathbb{E}\big[Y^{i} \log (Y)\big], \quad Y\sim \text{BBeta}(\boldsymbol{\theta}_0).
\end{align*}
If we prove that
\begin{eqnarray}\label{claim-1}
\mathbb{E}\big[Y^{i} \log(Y)\big]
=
\dfrac{1}{B(\alpha, \beta)}\,
\dfrac{\partial B(\alpha+i, \beta)}{\partial\alpha}, \quad i=0,1,2,\ldots,
\end{eqnarray}
the proof follows. In what remains of the proof, we show the validity of \eqref{claim-1}.
Indeed, since ${\partial y^{\alpha-1}/\partial \alpha}=(\log y) y^{\alpha-1}$, we get
\begin{eqnarray}\label{id-integral}
\mathbb{E}\big[Y^{i} \log(Y)\big]
&=&
\dfrac{1}{B(\alpha, \beta)}\,
\int_{0}^{1} \log(y)\, {y^{\alpha+i-1}(1 - y)^{\beta-1}} \, {\rm d} y \nonumber
\\[0,15cm]
&=&
\dfrac{1}{B(\alpha, \beta)}\,
\int_{0}^{1}
\dfrac{\partial}{\partial\alpha}\big[
{y^{\alpha+i-1}(1 - y)^{\beta-1}}\big] \, {\rm d} y.
\end{eqnarray}
A standard calculation shows that the conditions of the Leibniz integral rule are satisfied, so we can interchange the derivative and the integral in \eqref{id-integral}. Hence
\begin{align*}
\mathbb{E}\big[Y^{i} \log(Y)\big]
&=
\dfrac{1}{B(\alpha, \beta)}\,
\dfrac{\partial}{\partial\alpha}
\int_{0}^{1} {y^{\alpha+i-1}(1 - y)^{\beta-1}} \, {\rm d} y
=
\dfrac{1}{B(\alpha, \beta)}\,
\dfrac{\partial B(\alpha+i,\beta)}{\partial\alpha},
\end{align*}
and \eqref{claim-1} follows.
Thus, the proof of the lemma is complete.
\end{proof}
\begin{rem1}\label{rem-main}
By using Lemma \ref{exp-log}, the identity
$
{\partial \log B(\alpha, \beta)}/{\partial \alpha}=\psi(\alpha)-\psi(\alpha+\beta)
$,
with
$\psi(x)=\Gamma'(x)/\Gamma(x)$,
and the recurrence relation \eqref{rel-rec}, we have
\begin{align*}
\mathbb{E}\big[\log (X)\big]
=
\frac{1}{ Z(\boldsymbol{\theta}_\delta) }
\sum_{i=0}^{2}
c_i \,
\biggl[
\big(\psi(\alpha)-\psi(\alpha+\beta)\big) \prod_{j=0}^{i-1}\frac{\alpha+j}{\alpha+\beta+j}
+
{\partial\over \partial\alpha}
\prod_{j=0}^{i-1}\frac{\alpha+j}{\alpha+\beta+j}
\biggr].
\end{align*}
\end{rem1}
\begin{thm1}[Shannon entropy]
Let $X\sim \text{BBeta}(\boldsymbol{\theta}_\delta)$, with $\rho=0$, $\alpha\geqslant 2$ and $\delta=1$. Then
\begin{multline*}
\hspace*{-0.35cm}
H_1(X)= \log \Gamma(\alpha)+\log\Gamma(\beta) -\log\Gamma(\alpha + \beta)
+
\log\biggl[1
-2\, {\alpha\over \alpha+\beta}
+ {\alpha(\alpha+1)\over(\alpha+\beta)(\alpha+\beta+1)}\biggr]
\\[0,2cm]
- \frac{(\alpha-1)\beta(\beta+1)}{ (\alpha+\beta)(\alpha+\beta+1)
-2\alpha(\alpha+\beta+1)
+
{\alpha(\alpha+1)} }
\left[\psi(\alpha)-\psi(\alpha+\beta)-{2(\alpha+\beta)+1\over (\alpha+\beta) (\alpha+\beta+1)}\right]
\\[0,2cm]
+ \dfrac{(\alpha+\beta)(\alpha+\beta+1)(\beta+1)}{ (\alpha+\beta)(\alpha+\beta+1)
-2\alpha(\alpha+\beta+1)
+
{\alpha(\alpha+1)} }
\sum_{i=0}^{2}
c_i
\sum_{k=1}^{\infty}
\dfrac{1}{k}
\prod_{j=0}^{k+i-1}\frac{\alpha+j}{\alpha+\beta+j},
\end{multline*}
whenever the series above converges absolutely.
Here, $c_0=c_2=1$ and $c_1=-2$, and $\psi(x)=\Gamma'(x)/\Gamma(x)$ is the digamma function.
\end{thm1}
\begin{proof}
Since $\rho=0$ and $\delta=1$,
a simple computation shows that
\begin{multline}\label{Step-1}
\int_{0}^{1}
f(x;\boldsymbol{\theta}_1)
\log f(x;\boldsymbol{\theta}_1) \, {\rm d}x
=
\mathbb{E}\big[\log f(X;\boldsymbol{\theta}_1)\big]
\\[0,2cm]
=
-\log Z(\boldsymbol{\theta}_1) -\log{B}(\alpha,\beta)
+ (\alpha-1)\,\mathbb{E}\big[\log (X)\big] + (\beta+1)\,\mathbb{E}\big[\log(1-X)\big].
\end{multline}
Taking $c_0=c_2=1$ and $c_1=-2$ in Remark \ref{rem-main} we obtain
\begin{align}\label{Step-2}
\mathbb{E}\big[\log (X)\big]
=
\frac{\beta(\beta+1)}{ Z(\boldsymbol{\theta}_1)\, (\alpha+\beta) (\alpha+\beta+1)}
\left[\psi(\alpha)-\psi(\alpha+\beta)-{2(\alpha+\beta)+1\over (\alpha+\beta) (\alpha+\beta+1)}\right].
\end{align}
In what follows we provide a closed expression for the expectation
$\mathbb{E}\big[\log(1-X)\big]$.
Indeed, by using the series representation of the function $\log(1-x)$, also known as the Newton--Mercator series,
$
\log(1-x)
=
-
\sum_{k=1}^{\infty}
{x^k}/{k}
$
which converges for $0<x<1$, we have
\begin{align}\label{Step-3}
\mathbb{E}\big[\log(1-X)\big]
=
-\sum_{k=1}^{\infty}
\dfrac{\mathbb{E}(X^k)}{k}
=
-\dfrac{1}{ Z(\boldsymbol{\theta}_1) }
\sum_{i=0}^{2}
c_i
\sum_{k=1}^{\infty}
\dfrac{1}{k}
\prod_{j=0}^{k+i-1}\frac{\alpha+j}{\alpha+\beta+j},
\end{align}
where in the second equality we used Corollary \ref{moments}.
By combining \eqref{Step-1}, \eqref{Step-2} and \eqref{Step-3}, and using definitions of normalization constant $Z(\boldsymbol{\theta}_1)$ and beta function ${B}(\alpha,\beta)$, the proof follows.
\end{proof}
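Numerically, the entropy is conveniently evaluated through \eqref{Step-1}--\eqref{Step-3}, truncating the series at a large $K$; a sketch reusing \texttt{Zconst} and \texttt{mom\_bbeta} (the truncation level is our choice, not prescribed by the paper):
\begin{verbatim}
shannon_bbeta <- function(shape1, shape2, K = 2000) {  # rho = 0, delta = 1
  Z  <- Zconst(shape1, shape2, rho = 0, delta = 1)
  ab <- shape1 + shape2
  ElogX <- shape2*(shape2 + 1) / (Z * ab * (ab + 1)) *
    (digamma(shape1) - digamma(ab) - (2*ab + 1)/(ab*(ab + 1)))
  Elog1mX <- -sum(sapply(1:K, function(k)
    mom_bbeta(k, shape1, shape2, 0, 1) / k))           # truncated series
  log(Z) + log(beta(shape1, shape2)) -
    (shape1 - 1)*ElogX - (shape2 + 1)*Elog1mX
}
shannon_bbeta(2, 3)
\end{verbatim}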
\subsection{Stochastic representation}
\noindent
We say that a r.v. $Y$ has a non-standard Beta distribution on the interval $[a,b]$, with shape parameters $\alpha > 0$ and $\beta > 0$,
if its PDF is given by
\begin{equation*}
g(x; \alpha, \beta,a,b) = \frac{(x-a)^{\alpha-1}( b-x)^{\beta-1}}{B(\alpha, \beta) (b-a)^{\alpha+\beta-1}}, \quad a \leqslant {x} \leqslant b.
\end{equation*}
\begin{proposition}[Stochastic representation for $\delta<0$]\label{Stochastic representation}
Suppose $Y_{k;\alpha,\beta}$ has a non-standard Beta distribution on the interval $[0,1/k]$, with $k=1,2,3$, and shape parameters $\alpha > 0$ and $\beta > 0$. Let $W$ be a discrete random variable taking the values $1$, $2$ or $3$, with probabilities
\begin{align*}
\pi_1={1+\rho\over Z(\boldsymbol{\theta}_\delta)}, \quad \pi_2=-{2\alpha \delta\over Z(\boldsymbol{\theta}_\delta)\, (\alpha+\beta)},
\quad
\pi_3={\alpha(\alpha+1)\delta^2 \over Z(\boldsymbol{\theta}_\delta)\, (\alpha+\beta)(\alpha+\beta+1)},
\end{align*}
respectively, with $\delta<0$. A simple algebraic manipulation shows that $\pi_1+\pi_2+\pi_3=1$.
Assume that
$$Y
=
\sum_{k=1}^{3}Y_{k;\alpha+k-1,\beta}\, \delta_{W,k},
$$
and that $W$ is independent of $Y_{k;\alpha,\beta}$, for each $k=1,2,3$. Here $\delta_{W,k}$ is the Kronecker delta, i.e., $\delta_{W(\omega),k}$ equals 1 if $W(\omega)=k$, for $\omega$ in the sample space $\Omega$, and 0 otherwise.
If $X=WY$
then $X\sim \text{BBeta}(\boldsymbol{\theta}_\delta)$.
Conversely, if $X\sim \text{BBeta}(\boldsymbol{\theta}_\delta)$ then $X=WY$ in distribution.
\end{proposition}
\begin{proof}
By the law of total probability and by independence, we get
\begin{align*}
\mathbb{P}(X\leqslant x)
=
\mathbb{P}(WY\leqslant x)
&=
\sum_{l=1}^{3}
\mathbb{P}(WY\leqslant x\vert W=l) \mathbb{P}(W=l)
\\[0,15cm]
&=
\sum_{l=1}^{3}
\mathbb{P}(lY_{l;\alpha+l-1,\beta}\leqslant x)\mathbb{P}(W=l)
=
\sum_{l=1}^{3}
\mathbb{P}(Y_{1;\alpha+l-1,\beta}\leqslant x)\pi_l,
\end{align*}
because $lY_{l;\alpha+l-1,\beta}=Y_{1;\alpha+l-1,\beta}\sim \text{Beta}(\alpha+l-1,\beta)$, for $l=1,2,3$.
Since the CDF of $Y \sim \text{Beta}(\alpha,\beta)$ is given by
$
F(x; \alpha, \beta) = I_{x}(\alpha, \beta),
\ 0 \leqslant {x} \leqslant 1,
$
by the definition of the $\pi_l$'s, the above expression is
\begin{align*}
=
\sum_{l=1}^{3}
I_{x}(\alpha+l-1, \beta)\pi_l
=
\dfrac{1}{ Z(\boldsymbol{\theta}_\delta) }
\biggl[
(1+\rho)\,
{I_{x}(\alpha, \beta)}
-2\delta\,
\dfrac{B_{x}(\alpha+1, \beta)}{B(\alpha, \beta)}
+\delta^2\,
\dfrac{B_{x}(\alpha+2, \beta)}{B(\alpha, \beta)}
\biggr]
.
\end{align*}
But, by \eqref{CDF}, the right-hand side is equal to the CDF $F(x;\boldsymbol{\theta}_\delta)$.
This completes the proof.
\end{proof}
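The proposition yields an exact sampler for $\delta<0$: draw the component $W$ and then draw from the corresponding Beta distribution. A sketch, reusing \texttt{Zconst} (function names are ours):
\begin{verbatim}
rbbeta_mix <- function(n, shape1, shape2, rho, delta) {
  stopifnot(delta < 0)               # weights are nonnegative only here
  Z  <- Zconst(shape1, shape2, rho, delta)
  ab <- shape1 + shape2
  probs <- c(1 + rho,
             -2*shape1*delta/ab,
             shape1*(shape1 + 1)*delta^2/(ab*(ab + 1))) / Z
  w <- sample(1:3, n, replace = TRUE, prob = probs)
  rbeta(n, shape1 + w - 1, shape2)   # W * Y_W ~ Beta(alpha + W - 1, beta)
}
x <- rbbeta_mix(1e4, 2, 3, 0.1, -2)  # hist(x) reproduces the BBeta shape
\end{verbatim}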
\subsection{Identifiability}\label{Identifiability}
\noindent
Let us suppose that $f(x; \alpha, \beta)$ is the PDF of the Beta distribution, where $\alpha>0$ and
$\beta>0$ are the shape parameters.
A simple observation shows that the bimodal Beta PDF $f(x;\boldsymbol{\theta}_\delta)$ in \eqref{beta-density}, with parameter vector $\boldsymbol{\theta}_\delta =(\alpha,\beta,\rho,\delta)$, can be written as a finite (generalized) mixture of three Beta distributions with different shape parameters, i.e.
\begin{align}\label{eq-density}
f(x;\boldsymbol{\theta}_\delta)
=
\pi_1 f(x;\alpha,\beta)
+
\pi_2 f(x;\alpha+1,\beta)
+
\pi_3 f(x;\alpha+2,\beta),
\quad 0\leqslant x\leqslant 1,
\end{align}
where $\pi_1$, $\pi_2$ and $\pi_3$ are constants (depending only on $\boldsymbol{\theta}_\delta$) given in Proposition \ref{Stochastic representation},
and $Z({\boldsymbol{\theta}_\delta})$
is as in \eqref{partition-function}.
Unlike Proposition \ref{Stochastic representation}, here $\delta$ may take either sign. Non-negative mixing weights are not required, since a generalized mixture can still be a PDF even when some of the weights are negative.
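This identity is easy to confirm numerically for either sign of $\delta$ (our sketch, reusing \texttt{dbbeta} and \texttt{Zconst}):
\begin{verbatim}
shape1 <- 2; shape2 <- 3; rho <- 0.1; delta <- 2.4  # delta > 0 allowed here
Z  <- Zconst(shape1, shape2, rho, delta)
ab <- shape1 + shape2
probs <- c(1 + rho, -2*shape1*delta/ab,
           shape1*(shape1 + 1)*delta^2/(ab*(ab + 1))) / Z
x <- seq(0.01, 0.99, by = 0.01)
mix <- probs[1]*dbeta(x, shape1, shape2) +
       probs[2]*dbeta(x, shape1 + 1, shape2) +
       probs[3]*dbeta(x, shape1 + 2, shape2)
max(abs(mix - dbbeta(x, shape1, shape2, rho, delta)))  # ~ 0
\end{verbatim}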
\smallskip
Let $\mathcal{B}$ be the family of Beta distributions, as follows:
\begin{align*}
\mathcal{B}=\biggl\{F:F(x;\alpha,\beta)=\int_{0}^{x}f(y;\alpha,\beta)\, {\rm d}y, \ \alpha>0, \beta>0,\ 0\leqslant x\leqslant 1 \biggl\}.
\end{align*}
Write $\mathcal{H}_{\mathcal{B}}$ for the class of all finite mixtures of $\mathcal{B}$. It is well known that the class $\mathcal{H}_{\mathcal{B}}$ is identifiable (this fact is a consequence of the main result of \cite{Atienza06}).
The following result proves the identifiability of the bimodal Beta distribution.
\begin{proposition}
The mapping $\boldsymbol{\theta}_\delta \longmapsto f(\cdot;\boldsymbol{\theta}_\delta)$ is one-to-one.
\end{proposition}
\begin{proof}
Let us suppose that $f(x;\boldsymbol{\theta}_\delta)=f(x;\boldsymbol{\theta}'_\delta)$ for all $0\leqslant x\leqslant 1$. In other words, by \eqref{eq-density},
\begin{multline*}
\pi_1 f(x;\alpha,\beta)+
\pi_2 f(x;\alpha+1,\beta)+
\pi_3 f(x;\alpha+2,\beta)
\\[0,2cm]
=
\pi_1' f(x;\alpha',\beta')+
\pi_2' f(x;\alpha'+1,\beta')+
\pi_3' f(x;\alpha'+2,\beta').
\end{multline*}
Since $\mathcal{H}_{\mathcal{B}}$ is identifiable, we have $\pi_i=\pi_i'$, for $i=1,2,3$, and $\alpha=\alpha'$, $\beta=\beta'$. Hence, from the equalities $\pi_i=\pi_i'$, $i=1,2,3$, it follows immediately that $\rho=\rho'$ and $\delta=\delta'$. Therefore, $\boldsymbol{\theta}_\delta=\boldsymbol{\theta}'_\delta$, and the proof follows.
\end{proof}
\vspace*{-0,5cm}
\section{Regression model, estimation and diagnostic analysis}
\label{sect:5}
\noindent
Let $X_1, \ldots, X_n$ be $n$ independent random variables, where each $X_i$, $i = 1, \ldots, n$, follows the PDF given in~\eqref{beta-density}.
We assume that the parameters $\alpha_i$ and $\beta_i$ satisfy the following functional relations:
\begin{equation}\label{cs1}
g_1(\alpha_i) = \eta_{1i} = \mathbf{w}^\top_i\bm{\gamma} \quad \textrm{and} \quad g_2(\beta_i) = \eta_{2i} = \mathbf{z}^\top_i\bm{\zeta},
\end{equation}
where $\bm{\gamma} = (\gamma_1, \ldots, \gamma_p)^\top$ and $\bm{\zeta} = (\zeta_1, \ldots, \zeta_q)^\top$
are vectors of unknown regression coefficients which are assumed to be functionally independent,
$\bm{\gamma} \in \mathbb{R}^p$ and $\bm{\zeta} \in \mathbb{R}^q$, with $p + q < n$,
$\eta_{1i}$ and $\eta_{2i}$ are the linear predictors, and $\mathbf{w}_i = (w_{i1}, \ldots, w_{ip})^\top$
and $\mathbf{z}_i = (z_{i1}, \ldots, z_{iq})^\top$ are observations on $p$ and $q$ known regressors, for $i = 1, \ldots, n$. Furthermore, we assume that the covariate matrices $\mathbf{W} = (\mathbf{w}_1, \ldots, \mathbf{w}_n)^\top$ and $\mathbf{Z} = (\mathbf{z}_1, \ldots, \mathbf{z}_n)^\top$ have rank $p$ and $q$, respectively. The link functions $g_1: \mathbb{R}^+ \rightarrow \mathbb{R}$ and $g_2: \mathbb{R}^+ \rightarrow \mathbb{R}$ in (\ref{cs1}) must be strictly monotone and at least twice differentiable,
such that $\alpha_i = g_1^{-1}(\mathbf{w}_i^\top\,\bm{\gamma})$ and $\beta_i = g_2^{-1}(\mathbf{z}_i^\top\,\bm{\zeta})$, with $g_1^{-1}(\cdot)$ and
$g_2^{-1}(\cdot)$ being the inverse functions of $g_1(\cdot)$ and $g_2(\cdot)$, respectively.
The log-likelihood function for $\boldsymbol{\theta}_\delta = (\bm{\gamma} , \bm{\zeta}, \rho, \delta)$ based on a sample of $ n $ independent
observations is given by
\begin{equation}\label{eq:logm}
\ell(\boldsymbol{\theta}_\delta) = \sum_{i=1}^{n}\ell(\alpha_i, \beta_i, \rho, \delta),
\end{equation}
where
\begin{align*}
\ell(\alpha_i, \beta_i, \rho, \delta)
&=
-\log Z(\alpha_i,\beta_i,\rho,\delta) -\log {B}(\alpha_i, \beta_i)
+
\log\big[\rho+(1-\delta{x_i})^2\big]\\
&+
(\alpha_i-1)\log x_i +(\beta_i-1)\log(1-x_i),
\quad i=1,\ldots,n,
\end{align*}
and $Z(\alpha_i,\beta_i,\rho,\delta)$ is the normalizing constant in \eqref{partition-function} evaluated at $(\alpha_i,\beta_i,\rho,\delta)$.
The maximum likelihood estimator (MLE)
$\widehat{\bm{\theta}}_\delta = (\widehat{\bm{\gamma}}^\top, \widehat{\bm{\zeta}}^\top, \widehat{\rho}, \widehat{\delta})^\top$ of
$\bm{\theta}_\delta = (\bm{\gamma}^\top, \bm{\zeta}^\top, \rho, \delta)^\top$
is obtained by the maximization of the log-likelihood function \eqref{eq:logm}.
However, it is not possible to derive an analytical solution for the MLE $\widehat{\bm{\theta}}_\delta$; hence,
it must be obtained numerically, using an optimization algorithm such as Newton--Raphson or a quasi-Newton method.
Under mild regularity conditions and when $ n $ is large,
the asymptotic distribution of the MLE
$\widehat{\bm{\theta}}_\delta = (\widehat{\bm{\gamma}}^\top, \widehat{\bm{\zeta}}^\top, \widehat{\rho}, \widehat{\delta})^\top$
is approximately multivariate normal (of dimension $p+q+2$) with mean vector
$\bm{\theta}_\delta = (\bm{\gamma}^\top, \bm{\zeta}^\top, \rho, \delta)^\top$ and variance covariance matrix
$\mathbf{K}^{-1}(\bm{\theta}_\delta )$ where
$$\mathbf{K}(\bm{\theta}_\delta )= \mathbb{E}\left[- \ {\partial^2 \ell \left(\bm{\theta}_\delta \right)\over \partial \bm{\theta}_\delta \, \partial \bm{\theta}_\delta^\top} \right]$$
is the expected Fisher information matrix.
Unfortunately, there is no closed form expression for the matrix $\mathbf{K}(\bm{\theta}_\delta )$.
Nevertheless, a consistent estimator of the expected Fisher information matrix is given by
$$\mathbf{J}(\widehat{\bm{\theta}}_\delta)=- \ {\partial^2 \ell \left(\bm{\theta}_\delta \right)\over \partial \bm{\theta}_\delta \,\partial \bm{\theta}_\delta^\top} \Big{|}_{\bm{\theta}_\delta = \widehat{\bm{\theta}}_\delta} \ ,$$
which is the estimated observed Fisher information matrix.
Therefore, for large $n$, we can replace $\mathbf{K}(\bm{\theta}_\delta )$ by $\mathbf{J}(\widehat{\bm{\theta}}_\delta )$.
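A sketch of this estimation procedure with logarithmic links, as used in the simulations of Section \ref{sect:6}, is given below. This is our own illustration (the authors' code is not available to us), and the direct parametrization in $(\rho,\delta)$ is our choice:
\begin{verbatim}
# negative log-likelihood; theta = (gamma, zeta, rho, delta),
# W and Zm are the n x p and n x q covariate matrices
negloglik <- function(theta, x, W, Zm) {
  p <- ncol(W); q <- ncol(Zm)
  alpha <- exp(W  %*% theta[1:p])             # log link for alpha_i
  beta_ <- exp(Zm %*% theta[(p+1):(p+q)])     # log link for beta_i
  rho   <- theta[p + q + 1]
  delta <- theta[p + q + 2]
  ab <- alpha + beta_
  Z  <- 1 + rho - 2*delta*alpha/ab +          # per-observation constant
        delta^2*alpha*(alpha + 1)/(ab*(ab + 1))
  -sum(-log(Z) - lbeta(alpha, beta_) + log(rho + (1 - delta*x)^2) +
       (alpha - 1)*log(x) + (beta_ - 1)*log(1 - x))
}
# fit <- optim(start, negloglik, x = x, W = W, Zm = Zm,
#              method = "BFGS", hessian = TRUE)
# se  <- sqrt(diag(solve(fit$hessian)))       # observed-information SEs
\end{verbatim}
Positivity of $\rho$ can be enforced, if needed, with \texttt{method = "L-BFGS-B"} and a lower bound on that component.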
Let $ \theta_{\delta_r}$ be the \emph{r}-th component of $\bm{\theta}_\delta .$
The asymptotic $ 100 (1 - \varphi)\% $ confidence interval for $ \theta_{\delta_r} $ is given by
\begin{equation*}
\widehat{\theta}_{\delta_r} \pm z_{\varphi/2}\; \textrm{se}\left(\widehat{\theta}_{\delta_r}\right), \qquad r = 1, \ldots, p+q+2,
\end{equation*}
where $ z_{\varphi/2}$ is the $ \varphi/2$ upper quantile of the standard normal distribution and
$ \textrm{se}\left(\widehat{\theta}_{\delta_r}\right) $ is the asymptotic standard error of $ \widehat{\theta}_{\delta_r}.$
Note that $ \textrm{se}\left(\widehat{\theta}_{\delta_r}\right) $ is the square root of the \emph{r}-th diagonal element of the matrix $\mathbf{J}^{-1}(\widehat{\bm{\theta}}_\delta)$.
Residuals are widely used to check the adequacy of the fitted model.
To check the goodness of fit of the BBeta model,
we propose to use the randomized quantile residuals introduced by \cite{du96}.
Let $F(x_i;\boldsymbol{\theta}_\delta)$ be the cumulative distribution
function of the BBeta distribution, as defined in (\ref{CDF}), in which the regression
structures are assumed as in (\ref{cs1}). The randomized quantile residual is given by
$$r_i = \Phi^{-1}\left(F(x_i;\boldsymbol{\widehat\theta}_\delta)\right), \quad i = 1, \ldots, n,$$
where $\Phi^{-1}(\cdot)$ is the inverse of the standard normal distribution function.
If the assumed model is correctly specified for the data, these residuals follow a standard
normal distribution \citep{du96}.
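In \textsf{R}, the residuals follow directly from the \texttt{pbbeta} sketch given earlier; here \texttt{alpha\_hat}, \texttt{beta\_hat}, \texttt{rho\_hat} and \texttt{delta\_hat} are placeholders for the fitted values, not objects from the paper:
\begin{verbatim}
r <- qnorm(mapply(function(xi, a, b) pbbeta(xi, a, b, rho_hat, delta_hat),
                  x, alpha_hat, beta_hat))
qqnorm(r); abline(0, 1)  # should hug the identity line if the model fits
\end{verbatim}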
\vspace*{-0,5cm}
\section{Simulation study}
\label{sect:6}
\noindent
In this section, Monte Carlo simulations are performed
(i) to evaluate the finite-sample behavior of the maximum likelihood estimates of the regression coefficients and
(ii) to investigate the empirical distribution of the randomized quantile residuals.
The Monte Carlo experiments were carried out by considering the following regression structure
\begin{align*}
\log\left(\alpha_i\right) &= \gamma_0 + \gamma_1\,z_i,
\nonumber
\\ \nonumber
\log\left(\beta_i\right) &= \zeta_0 + \zeta_1\,z_i, \quad i = 1, \ldots, n, \nonumber
\end{align*}
where the true parameter values were chosen to match the estimates obtained in the real data application (Section \ref{sect:7}), i.e.,
$\gamma_0 = -1.8, \gamma_1 = 5.9, \zeta_0 = 3.8,
\zeta_1 = -2.4, \rho = 0.1$ and $\delta = 2.4$.
The covariate values of $ z_i $ were generated from the standard uniform
distribution.
The sample sizes considered were $n = 50, 100, 200$ and $300$.
All simulations were conducted in \textsf{R} using the
BFGS algorithm available in the \texttt{optim} function.
For each scenario the Monte Carlo experiment was repeated $5,000$ times.
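Since $\delta>0$ here, the mixture sampler based on Proposition \ref{Stochastic representation} does not apply; one convenient alternative, which we sketch below, is inversion of the closed-form CDF (reusing \texttt{pbbeta}; this is our illustration of how such data can be generated, not the authors' code):
\begin{verbatim}
rbbeta_inv <- function(n, shape1, shape2, rho, delta) {
  u <- runif(n)
  mapply(function(ui, a, b)
           uniroot(function(x) pbbeta(x, a, b, rho, delta) - ui,
                   c(1e-10, 1 - 1e-10))$root,
         u, shape1, shape2)
}
n <- 100
z <- runif(n)                                 # covariate
alpha_i <- exp(-1.8 + 5.9*z)                  # true regression structure
beta_i  <- exp( 3.8 - 2.4*z)
x <- rbbeta_inv(n, alpha_i, beta_i, rho = 0.1, delta = 2.4)
\end{verbatim}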
\subsection{Parameter estimation}
\noindent
In this subsection, a small simulation study is presented to assess the
finite-sample performance of the proposed estimators under the regression approach.
For this evaluation, the estimated relative bias and the estimated root mean squared error (RMSE) were calculated.
The results are presented in Table \ref{tab:parms_sim} and Figure \ref{fig:bps}.
Table \ref{tab:parms_sim} presents the bias and RMSE for the maximum likelihood estimators of
$\gamma_0, \gamma_1, \zeta_0, \zeta_1, \rho$ and $\delta$.
Based on these results, we find that the estimates converge to the true values as the sample size grows. As expected, increasing the sample size substantially reduces both the
bias and the RMSE. These findings are confirmed by the box plots shown in Figure \ref{fig:bps}.
\begin{table}[H]
\caption{Estimated bias and mean-squared error.}
\label{tab:parms_sim}
\onehalfspacing
\scalefont{0.8}
\centering
\begin{tabular}{crrrrrrrrrrrr} \midrule
\multirow{2}{*}{$n$} & \multicolumn{6}{c}{Bias} & \multicolumn{6}{c}{RMSE} \\
\cmidrule(lr){2-7} \cmidrule(lr){8-13}
& $\gamma_0$ & $\gamma_1$ & $\zeta_0$ & $\zeta_1$ & $\delta$ & $\rho$
& $\gamma_0$ & $\gamma_1$ & $\zeta_0$ & $\zeta_1$ & $\delta$ & $\rho$ \\
\midrule 50 & 0.212 & 0.106 & 0.132 & 0.299 & 0.177 & 1.306 & 0.234 & 0.634 & 0.417 & 0.839 & 0.488 & 0.235 \\
100 & 0.213 & 0.099 & 0.114 & 0.254 & 0.120 & 0.938 & 0.192 & 0.475 & 0.276 & 0.558 & 0.183 & 0.091 \\
200 & 0.202 & 0.093 & 0.095 & 0.215 & 0.081 & 0.543 & 0.157 & 0.390 & 0.181 & 0.381 & 0.068 & 0.006 \\
300 & 0.195 & 0.091 & 0.088 & 0.200 & 0.061 & 0.414 & 0.139 & 0.353 & 0.152 & 0.313 & 0.037 & 0.003 \\
\bottomrule \end{tabular}
\end{table}
\begin{figure}[H]
\centering
\includegraphics[width=0.9\linewidth]{bp_estimated_paramters.pdf}
\caption{Boxplots of the estimated parameters obtained in Monte Carlo experiments for different sample sizes.}
\label{fig:bps}
\end{figure}
\subsection{Residuals}
\noindent
The second simulation study was performed to examine how well the distribution of the randomized quantile residuals is approximated by the standard normal distribution. The evaluation of the randomized quantile residuals was based on normal probability plots of the mean order statistics and on descriptive measures. The results are presented in Table \ref{tab:sim_res} and Figure \ref{fig:normalplots}.
In Table \ref{tab:sim_res}, we present the mean, standard deviation (StdDev), skewness and kurtosis of the
randomized quantile residuals. In all scenarios, the residuals have approximately zero mean and unit standard deviation, skewness close to zero, and kurtosis near three.
Figure \ref{fig:normalplots} displays empirical quantiles versus theoretical quantiles plots of the randomized
quantile residuals. The results presented in Figure \ref{fig:normalplots} show that the
distribution of the randomized quantile residuals is well approximated by the standard normal distribution.
\begin{table}[H]
\caption{Descriptive measures of the randomized quantile residuals.}
\label{tab:sim_res}
\onehalfspacing
\centering
\begin{tabular}{crrrr} \midrule
$n$ & Mean & StdDev & Skewness & Kurtosis \\ \midrule 50 & $-$0.001 & 0.999 & 0.028 & 2.854 \\
100 & $-$0.002 & 0.999 & 0.054 & 2.976 \\
200 & $-$0.003 & 0.997 & 0.077 & 3.002 \\
300 & $-$0.003 & 0.997 & 0.084 & 3.025 \\
\bottomrule \end{tabular}
\end{table}
\begin{figure}[H]
\centering
\includegraphics[scale = 0.68]{res_sim.pdf}
\caption{Normal probability plots of the mean order statistics.}
\label{fig:normalplots}
\end{figure}
\vspace*{-0,5cm}
\section{Real data application}
\label{sect:7}
\noindent
In this section, to evaluate the applicability of the proposed model, a real data set with bimodality is considered.
In particular, a real-life application related to the proportion of votes that Jair Bolsonaro received in the second round of the Brazilian elections
in 2018 is analyzed. We compare the performance of the BBeta regression with that of the traditional beta regression model.
In order to estimate the parameters of the models, we adopt the MLE method (as discussed in Section \ref{sect:5}).
The asymptotic standard errors and confidence intervals were computed using the observed Fisher
information matrix. The required numerical evaluations for data analysis were implemented using the R software.
The goal of this data analysis is to describe the proportion of votes that Jair Bolsonaro received in the second round of the 2018 Brazilian elections for all 5{,}565 cities.
The response variable $X_i$ is the proportion of votes given the municipal human development (\textrm{mhdi}).
Figure \ref{empirical_plots:pdf} displays the histogram of the response variable used
in the application and the scatterplot of municipal human development against the proportion of votes.
From Figure \ref{empirical_plots:pdf}, we can see that the response variable is bimodal.
Furthermore, there is evidence of a trend in the proportion of votes as municipal human development increases.
\begin{figure}[H]
\centering
\setkeys{Gin}{width=0.43\textwidth,height=8.0cm} %
\includegraphics[]{density_proportion_votes.pdf}
\includegraphics[]{proportion_votes_vs_mhdi.pdf}
\caption{Empirical plots of data.}
\label{empirical_plots:pdf}
\end{figure}
To explain this proportion of votes we consider the bimodal beta regression model, defined as
\begin{eqnarray*}
Y_i &\sim& \textrm{BBeta}(\bm{\theta}_\delta), \\
\log(\alpha_i) &=& \gamma_0 + \gamma_1\,\textrm{mhdi}_i, \\
\log(\beta_i) &=& \zeta_0 + \zeta_1\,\textrm{mhdi}_i,
\end{eqnarray*}
where $i = 1, 2, \ldots, 5{,}565$ indexes the cities and $\textrm{mhdi}_i$ is the municipal human development of city $i$.
For comparison purposes the beta regression model was fitted, assuming that
\begin{eqnarray*}
Y_i &\sim& \textrm{Beta}(\mu_i, \phi_i), \\
\textrm{logit}(\mu_i) &=& \beta_0 + \beta_1\,\textrm{mhdi}_i, \\
\log(\phi_i) &=& \gamma_0 + \gamma_1\,\textrm{mhdi}_i.
\end{eqnarray*}
Table \ref{estimates} shows the estimated parameters, standard errors and the lower and upper
bounds of the 95\% confidence intervals under the BBeta and
Beta models. Note that the coefficients are statistically significant at the
5\% level for both the BBeta and
Beta regression models with the structure above.
\begin{table}[H]
\centering
\caption{ML estimates, standard errors and 95\% confidence interval.}\label{estimates}
\begin{tabular}{ccrrrr}
\hline
Model & Parameter & Estimate & S.E. & 2.5 \% & 97.5 \% \\
\hline
\multirow{6}{*}{BBeta} & $\gamma_0$ & $-$1.8999 & 0.1963 & $-$2.2846 & $-$1.5152 \\
& $\gamma_1$ & 5.9471 & 0.3044 & 5.3505 & 6.5437 \\
& $\zeta_0$ & 3.8341 & 0.1915 & 3.4587 & 4.2095 \\
& $\zeta_1$ & $-$2.4232 & 0.2862 & $-$2.9842 & $-$1.8622 \\
& $\rho$ & 0.1096 & 0.0090 & 0.0920 & 0.1273 \\
& $\delta$ & 2.4092 & 0.0351 & 2.3405 & 2.4780 \\
\hdashline
\multirow{4}{*}{Beta }& $\beta_0$ & $-$7.5343 & 0.0749 & $-$7.6810 & $-$7.3875 \\
& $\beta_1$ & 11.1820 & 0.1105 & 10.9654 & 11.3987 \\
& $\gamma_0$ & 1.0029 & 0.1675 & 0.6746 & 1.3312 \\
& $\gamma_1$ & 2.5214 & 0.2528 & 2.0260 & 3.0169 \\
\hline
\end{tabular}
\end{table}
Table \ref{AIC} shows the Akaike information criterion
(AIC), the Bayesian information criterion (BIC) and the Kolmogorov--Smirnov (KS) statistic for the fitted models. In general, the model that fits the data better is
expected to present the smaller values of these quantities (AIC and BIC).
Based on the AIC and BIC criteria, the model which provides the better fit to this data set is the BBeta regression model.
This claim is also supported by the
residual plots with simulated envelopes shown in Figure \ref{hnp_plots:pdf}.
\begin{table}[H]
\centering
\caption{Goodness-of-fit measures.}\label{AIC}
\begin{tabular}{lcrr} \hline
Model & KS & AIC & BIC \\
\hline
Beta & 0.0203 (0.2014) & $-$8238 & $-$8212 \\
BBeta & 0.0149 (0.5659) & $-$8786 & $-$8746 \\
\hline
\end{tabular}
\end{table}
\begin{figure}[H]
\centering
\includegraphics[width=0.7\linewidth]{hnps.pdf}
\caption{Half-normal plot of randomized quantile residuals with simulated envelope.}
\label{hnp_plots:pdf}
\end{figure}
\vspace*{-0,05cm}
\section{Concluding remarks}
\label{sect:8}
\noindent
Despite its broad applicability in many fields, the beta distribution is not suitable for modeling bimodal responses bounded on the unit interval.
In this paper, the well-known two-parameter beta distribution is extended by introducing two extra parameters, thus defining the bimodal beta (BBeta) distribution, which generalizes the beta distribution and is based on a quadratic transformation technique used to generate bimodal functions \citep{e:10}. We provide a mathematical treatment of the new
distribution, including bimodality, moments, entropy measures, stochastic representation and identifiability.
We allow a regression structure for the parameters $\alpha$ and $\beta$.
The estimation of the model parameters is approached by
maximum likelihood and its good performance has been evaluated
by means of Monte Carlo simulations. Furthermore, we have proposed residuals for the model and conducted a simulation study to establish
their empirical properties.
The proposed model was fitted to the proportion of votes that Jair Bolsonaro received in the second round of the 2018 Brazilian elections.
As expected, the BBeta model outperforms the beta regression in the presence of bimodality.