
where $(i, j, k)$ runs over the risk set $\mathcal{R}(\tau_{sh})$ for any given $(i, j)$.

Using Chebyshev's inequality, it follows from (12) and (13) that we have the following consistency results in terms of convergence in probability (Ma 1999):

(i) $\hat{U}_i \xrightarrow{P} U_i$ as $\sigma^2 \rightarrow 0$;

(ii) $\hat{U}_{ij} \xrightarrow{P} U_{ij}$ as $\omega^2 + \sigma^2 \rightarrow 0$;

(iii) $\hat{U}_{ij} \xrightarrow{P} U_{ij}$ as $\min_{j,k,s,h}(\mu_{ijk,h}^{(s)}) \rightarrow \infty$.

Results (i)-(iii) are usually referred to as ‘small dispersion asymptotics’. Let $n_{ij}$ be the number of induced observations $y_{ijk,h}^{(s)}$ contained in sub-cluster $(i, j)$. We also have the following large-sample asymptotics, provided that $\min_{j,k}(\mu_{ijk}) \ge c\log(\min_j(n_{ij}))/\min_j(n_{ij})$ for a positive constant $c$. That is, the only restriction is that $\mu_{ijk}$ should not tend to zero too quickly.

(iv) $\hat{U}_i \xrightarrow{P} U_i$ as $J_i \rightarrow \infty$ and $\hat{U}_{ij} \xrightarrow{P} U_{ij}$ as $\min_j(n_{ij}) \rightarrow \infty$.

The magnitude of $n_{ij}$ depends not only on the number of individuals in sub-cluster $(i, j)$, but also on the number of failures in each individual's stratum. In other words, the more subjects there are, especially subjects with complete survival histories, the better we are able to predict the random effects.
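To see concretely how weak the growth condition above is, here is a minimal numerical check (not from the paper; the constant $c$ is hypothetical) of how quickly the lower bound $c\log(n)/n$ on $\mu_{ijk}$ decays with the sub-cluster size:

```python
import math

c = 1.0  # hypothetical positive constant c in the growth condition
for n in [10, 100, 1000, 10000]:
    bound = c * math.log(n) / n  # lower bound c*log(n)/n on min mu_ijk
    print(f"n_ij = {n:>5}: mu_ijk must exceed {bound:.5f}")
# The bound vanishes as n_ij grows, so even means that tend to zero
# (slowly enough) satisfy the condition.
```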

4.2 Estimation of Regression Parameters

Consider first estimation of the regression parameters in the case of known dispersion parameters. Estimation of the unknown dispersion parameters will be discussed in the next section.

Differentiating the joint likelihood of the auxiliary model for the data and random effects yields the joint score function. Replacing the random effects with their predictors, we have an unbiased estimating function for the regression parameters $\boldsymbol{\gamma} = (\boldsymbol{\alpha}^\top, \boldsymbol{\beta}^\top)^\top$:

$$\psi(\boldsymbol{\gamma}) = \sum_{s=1}^{a} \sum_{h=1}^{q_s} \sum_{(i,j,k) \in \mathcal{R}(\tau_{sh})} \mathbf{x}_{ijk,h}^{(s)} \left(Y_{ijk,h}^{(s)} - \hat{U}_{ij}\,\mu_{ijk,h}^{(s)}\right)$$
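As an illustration, the following sketch evaluates $\psi(\boldsymbol{\gamma})$ numerically under two assumptions not specified here: the induced observations are stacked into flat arrays across all risk sets, and the means follow a log-linear form $\mu_r = \exp(\mathbf{x}_r^{\top}\boldsymbol{\gamma})$:

```python
import numpy as np

def psi(gamma, X, Y, U_hat):
    """Estimating function psi(gamma) for the regression parameters.

    Assumed flattened layout: row r of X is the covariate vector
    x_{ijk,h}^{(s)} of one induced observation, Y[r] is Y_{ijk,h}^{(s)},
    and U_hat[r] is the predictor of the sub-cluster random effect U_ij
    attached to that observation. The log-linear mean is an assumption.
    """
    mu = np.exp(X @ gamma)          # assumed means mu_{ijk,h}^{(s)}
    return X.T @ (Y - U_hat * mu)   # sum_r x_r (Y_r - U_hat_r * mu_r)
```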

The solution of $\psi(\boldsymbol{\gamma}) = \mathbf{0}$ provides estimates of the regression parameters. The Newton scoring algorithm introduced by Jørgensen et al. (1995) can be used to solve this estimating equation.

The Newton scoring algorithm is defined as the Newton algorithm applied to the equation $\psi(\boldsymbol{\gamma}) = \mathbf{0}$, but with the derivative of $\psi(\boldsymbol{\gamma})$ replaced by its expectation. This expectation, denoted by $\mathbf{S}(\boldsymbol{\gamma})$, is called the sensitivity matrix:

$$\mathbf{S}(\boldsymbol{\gamma}) = \sum_{i=1}^{m} c_i\, e_i e_i^{\top} + \sum_{i=1}^{m} \sum_{j=1}^{J_i} \omega^2 w_{ij}\, \mathbf{f}_{ij} \mathbf{f}_{ij}^{\top}$$
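A minimal sketch of the resulting iteration, assuming $\mathbf{S}(\boldsymbol{\gamma})$ is positive definite so that each step solves $\mathbf{S}(\boldsymbol{\gamma})\,\boldsymbol{\delta} = \psi(\boldsymbol{\gamma})$ (the sign convention, tolerance, and stopping rule here are assumptions, not the authors' specification):

```python
import numpy as np

def newton_scoring(gamma0, psi_fn, S_fn, tol=1e-8, max_iter=50):
    """Newton scoring for psi(gamma) = 0: a Newton iteration with the
    derivative of psi replaced by the sensitivity matrix S(gamma)."""
    gamma = np.asarray(gamma0, dtype=float)
    for _ in range(max_iter):
        # Solve S(gamma) step = psi(gamma); assumes S is positive definite.
        step = np.linalg.solve(S_fn(gamma), psi_fn(gamma))
        gamma = gamma + step
        if np.linalg.norm(step) < tol:  # stop when the update is negligible
            break
    return gamma
```

In practice `psi_fn` and `S_fn` would close over the data and the current predictors $\hat{U}_{ij}$, re-evaluating both at each iterate.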