When does the sd stay the same, even after values in the sample were changed?
The question is about the data, not random variables.
Let $X = (x_1, x_2, \ldots, x_n)$ be the data and $Y = (y_1, y_2, \ldots, y_n)$ be additive changes to the data so that the new values are $(x_1+y_1, \ldots, x_n+y_n)$. From
$$\text{Var}(X) = \text{Var}(X+Y) = \text{Var}(X) + 2 \text{Cov}(X,Y) + \text{Var}(Y)$$
we deduce that
$$(*) \quad \text{Var}(Y) + 2 \text{Cov}(X,Y) = 0$$
is necessary for the variance to be unchanged. Add in $n-k$ additional constraints to zero out all but $k$ of the $y_i$ (there are ${n \choose k}$ ways to do this) and note that all $n-k+1$ constraints almost everywhere have linearly independent derivatives. By the Implicit Function Theorem, this defines a manifold of $n - (n-k+1) = k-1$ dimensions (plus perhaps a few singular points): those are your degrees of freedom.
For example, with $X = (2, 3, 4)$ we compute
$$3 \text{Var}(Y) = y_1^2 + y_2^2 + y_3^2 - (y_1+y_2+y_3)^2/3$$
$$3 \text{Cov}(X,Y) = (2 y_1 + 3 y_2 + 4 y_3) - 3(y_1 + y_2 + y_3)$$
If we set (arbitrarily) $y_2 = y_3 = 0$ the solutions to $(*)$ are $y_1 = 0$ (giving the original data) and $y_1 = 3$ (the posted solution). If instead we require $y_1=y_3 = 0$ the only solution is $y_2 = 0$: you can't keep the SD constant by changing $y_2$. Similarly we can set $y_3 = -3$ while zeroing the other two values. That exhausts the possibilities for $k=1$. If we set only $y_3 = 0$ (one of the cases where $k = 2$) then we get a set of solutions
$$y_2^2 - y_1 y_2 + y_1^2 - 3y_1 = 0$$
which is an ellipse in the $(y_1, y_2)$ plane. Similar sets of solutions arise for the choices $y_2 = 0$ and $y_1 = 0$.
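These $k = 1$ solutions are easy to verify numerically; a minimal sketch in Python (using the population standard deviation, consistent with the $1/n$ variance above):

```python
from statistics import pstdev  # population SD, matching the 1/n variance used above

x = [2, 3, 4]
sd = pstdev(x)

# y = (3, 0, 0): the k = 1 solution noted above
assert abs(pstdev([2 + 3, 3, 4]) - sd) < 1e-12

# y = (0, 0, -3): the other k = 1 solution
assert abs(pstdev([2, 3, 4 - 3]) - sd) < 1e-12
```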
When does the sd stay the same, even after values in the sample were changed?
Suppose that you have a random variable $X$ and you wish to find the set of transformations $Y=f(X)$ such that the standard deviation of $Y$ is identical to the standard deviation of $X$.
Consider first the set of linear transformations:
$Y = a X + b$ where $a, b$ are constants.
It is clear that:
$\text{Var}(Y) = a^2 \text{Var}(X)$.
Thus, the only linear transformations that preserve the standard deviation are those with $a = \pm 1$: translations, possibly combined with a sign flip (see the comment by mbq to this answer). I suspect that non-linear transformations do not preserve standard deviations.
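The scaling rule is easy to confirm numerically; a quick sketch with arbitrary sample data:

```python
from statistics import pstdev, pvariance

x = [1.0, 4.0, 2.5, 7.0]   # arbitrary sample data
a, b = -1.0, 10.0
y = [a * xi + b for xi in x]

# Var(aX + b) = a^2 Var(X), so the SD is preserved exactly when |a| = 1
assert abs(pvariance(y) - a**2 * pvariance(x)) < 1e-9
assert abs(pstdev(y) - pstdev(x)) < 1e-9
```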
Forecasting unemployment rate
The Arellano-Bond estimator has been designed for precisely this type of problem.
You will find a short non-technical paper with examples here. In a nutshell, it combines the information embedded in the large number of cross-sections to make up for the small number of points in each series. This estimator is widely used and implemented: it is available in the default gretl package, in Stata via the XTABOND2 package, and in R via the plm package (you should easily find a large number of papers using it).
EDIT:
Given that spatial correlation may indeed be informative (see Andy's post), I would advise adding a variable:
$s_{it} = u_{it} - \bar{u}_{-it}$
where $u_{it}$ is (possibly the $\log()$ of) the unemployment rate of region $i$ at time $t$ and $\bar{u}_{-it}$ is its average value among the $k$ geographical neighbors of region $i$ (excluding region $i$ itself). I would advise trying different values of $k$ until small changes in $k$ do not affect the end result/conclusions of the estimation. Then, for efficient and consistent estimation of $\beta_s$ (the coefficient associated with the variable $s$), I would use OLS for the main effect and allow for a random component in the error terms to account for inter-regional heterogeneity in $\beta_s$, thereby leveraging the fact that the R package plm allows combining GMM (i.e., Arellano-Bond) and random-effect coefficients.
Concerning Andy W's remark: you could read these two documents for a non-technical summary. The full paper version is here. Note the reliance on both a large number of cross-sections and the time dimension.
PS: Thanks @Srikant. I think I get it now :)
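Constructing the neighbor-deviation variable $s_{it}$ takes only a few lines; a hypothetical sketch in Python (the region names, rates, and neighbor lists below are made up purely for illustration):

```python
# u[region][t]: unemployment rate of each region over time (hypothetical data)
u = {
    "A": [5.0, 5.5, 6.0],
    "B": [4.0, 4.5, 5.0],
    "C": [7.0, 6.5, 6.0],
}
# the k geographical neighbors of each region, excluding the region itself
neighbors = {"A": ["B", "C"], "B": ["A"], "C": ["A", "B"]}

def s(region, t):
    """s_it = u_it minus the average rate over the region's neighbors at time t."""
    nb = neighbors[region]
    return u[region][t] - sum(u[r][t] for r in nb) / len(nb)

# e.g. s("A", 0) = 5.0 - (4.0 + 7.0) / 2 = -0.5
```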
Forecasting unemployment rate
Given the nature of your data I would suggest you investigate the use of exponential smoothing as well as fitting ARIMA-type models, especially due to the temporal constraints within your data. Although I wouldn't doubt spatial dependencies exist, I would be a bit skeptical about their usefulness in forecasting (in what I would imagine are fairly large areas), especially since any spatial dependency will likely already be captured (at least to a certain extent) in previous observations in the series.
Where the spatial dependencies may be helpful is if you have small area estimation problems, and you can use the spatial dependency in your data to help smooth out your estimations in those noisy geographic regions. This may not be a problem though since you have aggregated data for a full year.
You shouldn't take my word for it though, and should investigate the economics literature on the subject and assess various forecasting methods yourself. It's quite possible other variables are useful predictors of future unemployment in similar panel settings.
Edit:
First I'd like to clarify that I did not mean that the OP should simply prefer some type of exponential smoothing over other techniques. I think the OP should assess performance of various forecasting methods using a hold out sample of 1 or 2 time periods. I do not know the literature for forecasting unemployment, but I have not seen any method so obviously superior that others should be dismissed outright in any context.
Kwak mentions a key point I did not consider initially (and Stephan's comment makes the same point very succinctly as well). The panel nature of the data allows one to estimate an auto-regressive component in the model much more easily than in a single time series. So I would follow his suggestion and consider the A/B estimator a good bet to provide the best forecast accuracy.
I'm still sticking with my initial suggestion though that I am skeptical of the usefulness of the spatial dependence, and one should assess a model's predictive accuracy with and without the spatial component. In terms of prediction it is not simply whether some sort of spatial auto-correlation exists, it is whether that spatial auto-correlation is useful in predicting future values independent of past observations in the series.
To simplify my reasoning, let's denote:
$R_{t}$ corresponds to a geographic region $R$ at time $t$
$R_{t-1}$ corresponds to a geographic region $R$ at the previous time period
$W_{t-1}$ corresponds to however one wants to define the spatial relationship for the neighbors of $R_{t}$ at the previous time period
In this case $R$ is some attribute and $W$ is that same attribute in the neighbors of $R$ (i.e. an endogenous spatial lag.)
In pretty much all cases of lattice areal data, we have a relationship between $R$ and $W$. Two general explanations for this relationship are
1) The General Social Process Theory
This is when there are processes that affect $R$ and $W$ simultaneously that result in similar values with some sort of spatial organization. The support of the data does not distinguish between the forces that shape attributes in a broader scope than the areal units encompass. (I imagine there is a better name for this, so if someone could help me out.)
2) The Spatial Externalities Theory
This is when some attribute of $W$ directly affects an attribute of $R$. Srikant's example of job diffusion is an example of this.
In the context of forecasting, the general social process model may not be all that helpful in forecasting. In this case, $R_{t-1}$ and $W_{t-1}$ are reflective of the same external shocks, and so $W_{t-1}$ is less likely to have exogenous power to predict $R_{t}$ independent of $R_{t-1}$.
IMO, in the spatial externalities case I would expect $W_{t-1}$ to have greater potential to forecast $R_{t}$ independent of $R_{t-1}$ in the short run, because $R_{t-1}$ and $W_{t-1}$ can be reflective of different external shocks to the system. This is my opinion though, and you typically can't distinguish between the general social process model and the spatial externalities model through empirical means in a cross-sectional design (they are probably both occurring to a certain extent in many contexts). Hence I would attempt to validate its usefulness before simply incorporating it into the forecast. Better knowledge of the literature and social processes would definitely be helpful here to guide your model building. In criminology, only in a very limited set of circumstances does the externalities model make sense (but I imagine it is more likely in economics data). Models of spatial hedonic housing prices often show very strong spatial effects, and in that context I would expect the spatial component to have a strong ability to forecast housing prices. (I like Luc Anselin's explanation of these two different processes better than mine in this paper, PDF here)
Often how we define $W$ is a further problem in this setting. Most conceptions of $W$ are very simplistic and probably aren't entirely reflective of real geographic processes. Here kwak's suggestion of adding a random component to the $W$ effect for each $R$ makes a lot of sense. An example would be that we would expect New York City to influence its neighbors, but we wouldn't expect NYC's neighbors to have all that much influence on NYC. This still doesn't solve how to decide what is a neighbor or how to best represent the effects of neighbors. What kwak suggests is essentially a local version of Geary's C (spatial differences); local Moran's I (spatial averages) is a common approach as well.
I'm still a little surprised at the negative responses to my suggestion to use simpler smoothing methods (even if they are meant for univariate time series). Am I naive to think exponential smoothing or some other type of moving-window technique won't perform at least comparably well enough to more complicated procedures to assess it? I would be more worried if the series were such that we would expect seasonal components, but that is not the case here.
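For reference, the simple exponential smoothing benchmark I have in mind is only a few lines of code; a sketch (the series and smoothing constant below are arbitrary illustrations):

```python
def ses_forecast(series, alpha=0.3):
    """Simple exponential smoothing: each observation updates the smoothed
    level by a fraction alpha of the forecast error; the one-step-ahead
    forecast is the final level."""
    level = series[0]
    for y in series[1:]:
        level = alpha * y + (1 - alpha) * level
    return level  # forecast for the next period

rates = [5.2, 5.4, 5.1, 5.6, 5.8]   # hypothetical annual unemployment rates
print(round(ses_forecast(rates, alpha=0.3), 4))  # → 5.4699
```

With only a handful of annual observations per region, a hold-out comparison of this against the richer panel models is cheap to run.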
Pseudo-random orthogonal matrix generation
It's in the Test Matrix Toolbox, not the Matrix Computation Toolbox. The M-file qmult.m (premultiplication by a Haar-distributed pseudorandom orthogonal matrix) can be found here or here.
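If the toolbox is not available, the underlying construction (a Haar-distributed pseudorandom orthogonal matrix, obtained by orthonormalizing a matrix of iid standard normals) can be sketched in plain Python; this is an illustration of the idea, not the toolbox code:

```python
import math
import random

def haar_orthogonal(n, rng=random):
    """Return an n x n orthogonal matrix drawn (up to floating point) from
    the Haar distribution: fill a matrix with iid N(0,1) entries, then
    orthonormalize its columns by modified Gram-Schmidt (thin QR). Since
    Gram-Schmidt yields a positive-diagonal R, the resulting Q is Haar."""
    a = [[rng.gauss(0.0, 1.0) for _ in range(n)] for _ in range(n)]
    cols = []  # orthonormal columns built so far
    for j in range(n):
        v = [a[i][j] for i in range(n)]
        for u in cols:  # subtract the projection onto each earlier column
            c = sum(ui * vi for ui, vi in zip(u, v))
            v = [vi - c * ui for ui, vi in zip(u, v)]
        norm = math.sqrt(sum(vi * vi for vi in v))
        cols.append([vi / norm for vi in v])
    # assemble the matrix whose j-th column is cols[j]
    return [[cols[j][i] for j in range(n)] for i in range(n)]
```

A quick sanity check is that `Q`'s columns are orthonormal, i.e. `Q^T Q` is the identity to machine precision.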
Predicting proportions from time with a discontinuity
Sounds to me like Y(X) is a sigmoidal process, so logistic regression should be suitable for this data. If you model this in R with:
glm(Y ~ X, family = binomial)
you will find that the "sharpness" of the transition is determined by the magnitude of the X coefficient, and the point of transition (technically the mid-point, where the fitted proportion is 0.5) is at minus the ratio of the intercept coefficient to the X coefficient. I made an image to illustrate this but cannot seem to upload it for some reason.
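To make the two quantities concrete: with fitted intercept $\beta_0$ and slope $\beta_1$, the fitted curve is $p(x) = 1/(1+e^{-(\beta_0+\beta_1 x)})$, which crosses $p = 0.5$ exactly at $x = -\beta_0/\beta_1$. A quick check with made-up coefficients:

```python
import math

def logistic(x, b0, b1):
    """Fitted logistic curve p(x) = 1 / (1 + exp(-(b0 + b1 * x)))."""
    return 1.0 / (1.0 + math.exp(-(b0 + b1 * x)))

b0, b1 = -6.0, 1.5           # hypothetical fitted coefficients
midpoint = -b0 / b1          # = 4.0: the x where the fitted proportion is 0.5
assert abs(logistic(midpoint, b0, b1) - 0.5) < 1e-12

# a larger |b1| (holding the midpoint fixed) gives a sharper transition
assert logistic(4.5, -4.0 * 3.0, 3.0) > logistic(4.5, -4.0 * 1.5, 1.5)
```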
Predicting proportions from time with a discontinuity
Ignoring the "change point", your description suggests to me a (non-linear) mixed-effects model of the following form:
$g(E(Y_i)) = X_i \beta + Z_i U$
where the $\beta$s are fixed effects, the $U$s are random effects, and $g(E(Y_i))$ is the (logit) link applied to a binomial mean.
This will deal with the longitudinal correlation of the data and the non-Gaussian distribution issues.
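For concreteness, the logit link and its inverse, which map between the binomial mean $p \in (0,1)$ and the unbounded linear predictor $X_i\beta + Z_iU$, are (a minimal sketch):

```python
import math

def logit(p):
    """Link: g(p) = log(p / (1 - p)), mapping (0, 1) onto the real line."""
    return math.log(p / (1.0 - p))

def inv_logit(eta):
    """Inverse link: maps a linear predictor eta back to a probability."""
    return 1.0 / (1.0 + math.exp(-eta))

assert abs(inv_logit(logit(0.3)) - 0.3) < 1e-12
assert logit(0.5) == 0.0
```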
This must be coupled with some form of change point model, probably a Hidden Markov Model (HMM).
http://en.wikipedia.org/wiki/Hidden_markov_model
It may be necessary to set up the model as a Directed Acyclic Graph (DAG) in MCMC format, or even specify it fully in a Bayesian framework using software such as WinBUGS.
See:
http://en.wikipedia.org/wiki/Directed_Acyclic_Graph
http://en.wikipedia.org/wiki/MCMC
http://en.wikipedia.org/wiki/WinBUGS
Predicting proportions from time with a discontinuity
James' approach looks good: each observation, according to your description, might have a Binomial(n[i], p[i]) distribution where n[i] is known and--to be fully general--p[i] is a completely unknown function that rises from near 0 to near 1 as i increases. A logistic regression (GLM with binomial response and logistic link) against X[i] = i alone as the explanatory variable might even work. If it's a poor fit, you can introduce additional terms, such as higher powers or (better yet, given the nonparametric spirit) splines. This readily allows for incorporating any covariates into the model, too.
In effect, what appears to be an abrupt change in the response might really just be a natural linear (or nearly linear) progression of logit(p) on which is superimposed Binomial variability. It is this possibility that leads me slightly away from the direction indicated by Thylacoleo, whose approach clearly is valid and likely to be effective. I'm just suspecting (hoping?) that your situation might be amenable to this somewhat simpler analysis.
A complicating possibility concerns the possible autocorrelation of the responses, but that would need to be investigated only if the logistic residuals look strongly over- or under-dispersed.
As a matter of EDA, you could smooth successive observations in a natural way and plot their logits against i. For instance, to smooth observations y[i] and y[i+1] you would construct (n[i]*y[i] + n[i+1]*y[i+1])/(n[i] + n[i+1]), effectively pooling two successive batches; longer smoothing windows can be constructed the same way. (This would automatically cancel out any negative short-term temporal correlation, too.) The fit to the smooth wouldn't be quite right--it would be less steep than appropriate--but it would suggest choices for the general form of the covariates (i.e., functions of i) to use in the regression.
This is, of course, only one of many possible models. For example, the observations at each time point might be a binomial mixture, which would allow both for overdispersion and another way of getting nonlinear fits on the logit scale.
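The pooling smoother described above amounts to a weighted average of the observed proportions with the batch sizes as weights; a short sketch:

```python
import math

def pooled_smooth(y, n, window=2):
    """Smooth successive observed proportions y[i] (with batch sizes n[i])
    by pooling `window` consecutive batches,
    (n[i]*y[i] + ...) / (n[i] + ...), and return the logits of the
    pooled proportions."""
    out = []
    for i in range(len(y) - window + 1):
        num = sum(n[j] * y[j] for j in range(i, i + window))
        den = sum(n[j] for j in range(i, i + window))
        p = num / den
        out.append(math.log(p / (1.0 - p)))  # logit of the pooled proportion
    return out

# pooling (n, y) = (10, 0.2) with (30, 0.4) gives (2 + 12) / 40 = 0.35
```

Plotting these logits against i then suggests the general form of the covariates (functions of i) to use in the regression.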
Estimating beta-binomial distribution
|
To fit the model you can use JAGS or WinBUGS. In fact, if you look at the week-3 lecture notes on Paul Hewson's webpage, the rats JAGS example is a beta-binomial model. He puts gamma priors on alpha and beta.
|
40,510
|
Estimating beta-binomial distribution
|
You don't necessarily have to go Bayesian on your model; plain maximum likelihood estimation works just fine (though it has no closed-form solution). Multiple R packages (e.g. aod or VGAM) will fit the distribution for you.
Alternatively, you can use the quasi-likelihood based overdispersed binomial model, which does not assume a beta-binomial distribution but simply adjusts for the overdispersion. The glm function with the quasibinomial family will fit this model in R.
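If you prefer to avoid extra packages, the maximum likelihood fit can also be sketched in base R via `optim`; the simulated data and starting values below are illustrative assumptions:

```r
# Hedged sketch: beta-binomial MLE in base R, optimising over
# (log alpha, log beta) to keep both parameters positive.
set.seed(42)
t <- rep(30, 200)                       # trials per observation
y <- rbinom(200, t, rbeta(200, 2, 5))   # counts from a beta-binomial(2, 5)

negll <- function(par) {
  a <- exp(par[1]); b <- exp(par[2])
  -sum(lchoose(t, y) + lbeta(a + y, b + t - y) - lbeta(a, b))
}
fit <- optim(c(0, 0), negll)            # no closed form, so optimise numerically
exp(fit$par)                            # estimates of (alpha, beta)
```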
|
40,511
|
Estimating beta-binomial distribution
|
You have a hierarchical Bayesian model. Brief details below:
Likelihood function:
$$f(n_i \mid p_i, t_i) = \binom{t_i}{n_i} p_i^{n_i} (1-p_i)^{t_i - n_i}$$
Priors on $p_i, \alpha, \beta$:
$$p_i \sim \text{Beta}(\alpha, \beta)$$
$$\alpha \sim N(\alpha_{mean}, \alpha_{var})\, I(\alpha > 0)$$
$$\beta \sim N(\beta_{mean}, \beta_{var})\, I(\beta > 0)$$
The full conditional posteriors are:
$$p_i \mid \alpha, \beta, n_i, t_i \sim \text{Beta}(\alpha + n_i,\ \beta + t_i - n_i)$$
$$\pi(\alpha \mid \cdot) \propto I(\alpha > 0)\, B(\alpha, \beta)^{-N} \prod_i p_i^{\alpha - 1}\, \exp\!\left(-\frac{(\alpha - \alpha_{mean})^2}{2\,\alpha_{var}}\right)$$
$$\pi(\beta \mid \cdot) \propto I(\beta > 0)\, B(\alpha, \beta)^{-N} \prod_i (1 - p_i)^{\beta - 1}\, \exp\!\left(-\frac{(\beta - \beta_{mean})^2}{2\,\beta_{var}}\right)$$
where $N$ is the number of observations and $B(\cdot,\cdot)$ is the Beta function (the $B(\alpha,\beta)^{-N}$ factor must be kept, since it depends on both parameters). You can then use a combination of Gibbs (for the $p_i$) and Metropolis-Hastings (for $\alpha$ and $\beta$) to draw from the posterior distributions.
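A minimal sketch of such a sampler in R, on simulated data: Gibbs updates for the $p_i$ and random-walk Metropolis for $\alpha$ and $\beta$. The prior hyperparameters, proposal step size, and chain length are arbitrary illustrative choices.

```r
# Hedged sketch only; tune step sizes and chain length for real use.
set.seed(1)
N  <- 50
ti <- rep(30, N)
ni <- rbinom(N, ti, rbeta(N, 2, 5))          # simulated data

a_mean <- 2; a_var <- 10; b_mean <- 5; b_var <- 10   # illustrative priors

log_kernel <- function(a, b, p) {            # log of prod_i Beta(p_i | a, b)
  if (a <= 0 || b <= 0) return(-Inf)         # enforces the I(.) constraints
  sum(dbeta(p, a, b, log = TRUE))            # includes the B(a,b)^{-N} factor
}

S <- 2000
a <- 1; b <- 1
draws <- matrix(NA_real_, S, 2)
for (s in 1:S) {
  p  <- rbeta(N, a + ni, b + ti - ni)        # Gibbs step for each p_i
  ap <- a + rnorm(1, 0, 0.3)                 # Metropolis step for alpha
  lr <- (log_kernel(ap, b, p) - (ap - a_mean)^2 / (2 * a_var)) -
        (log_kernel(a,  b, p) - (a  - a_mean)^2 / (2 * a_var))
  if (log(runif(1)) < lr) a <- ap
  bp <- b + rnorm(1, 0, 0.3)                 # Metropolis step for beta
  lr <- (log_kernel(a, bp, p) - (bp - b_mean)^2 / (2 * b_var)) -
        (log_kernel(a, b,  p) - (b  - b_mean)^2 / (2 * b_var))
  if (log(runif(1)) < lr) b <- bp
  draws[s, ] <- c(a, b)
}
colMeans(draws[-(1:500), ])                  # posterior means for (alpha, beta)
```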
|
40,512
|
Is the class of models for which the MLE exists also the one for which flat priors are permissible?
|
No, these are somewhat different problems. If you have an improper flat prior and you don't have a unique MLE, you will often not have a unique posterior mode, so neither MLE nor MAP estimation will be useful without some additional thought/constraints. But you can easily have a proper posterior.
Some examples:
Mixture models, where there is non-identifiability because of relabelling. There will still be relabelling in the posterior, but the posterior will be proper as long as the mixing probabilities are bounded away from zero.
'Flat' or nearly flat regions in the likelihood: if you have $2\times 2$ table where you only observe the margins, the odds ratio is non-identifiable and the likelihood is nearly flat over some range of values. Given a flat prior, you'd get a flat posterior over that range. However, the flat range will typically be bounded so that the posterior is proper.
It's quite possible to have non-identifiability with a bounded parameter space, so that even a flat posterior is proper. Suppose $Y\sim Binomial(1,p_1)$ and you have a flat prior over $[0,1]\times[0,1]$ for $(p_1,p_2)$. The posterior for $p_2$ (about which you have no data) will still be flat, but it will not be improper.
Conversely, you can get an improper posterior without non-identifiability. Hobert and Casella discuss this for linear mixed models here. They don't explicitly use flat priors, but their improper priors could be regarded as flat for some transformed parameter.
One situation where you can get an improper posterior from non-identifiability is when the likelihood is flat on an unbounded subspace of the parameter space. Suppose you have a model $Y\sim N(\alpha+\beta,1)$. The data only tell you about $\alpha+\beta$, and your posterior for $\alpha-\beta$ will be flat if the prior is flat.
|
40,513
|
Is the class of models for which the MLE exists also the one for which flat priors are permissible?
|
If the prior is uniform then
$$f(\theta|x) = \frac{\mathcal{L}(\theta,x)}{\int_{\Theta} \mathcal{L}(\theta,x)\, d\theta}$$
and this is a proper distribution when the integral of the likelihood function in the denominator is finite.
A simple example where this fails is when, for a particular observation $x$, the likelihood stays above some positive value over an infinite range of the parameter. For example, consider the Poisson distribution $X \sim Poisson(\lambda=1/\theta)$ and the observation $x=0$. The likelihood is $\mathcal{L}(\theta,0) = e^{-1/\theta}$, so we need the integral $\int_0^\infty e^{-1/\theta}\, d\theta$, which diverges and has no finite value.
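A quick numerical illustration in base R of why that integral diverges: the likelihood tends to $1$ as $\theta\to\infty$, so the mass over $(0, T]$ grows roughly linearly in $T$ (the cutoffs 10 and 1000 below are arbitrary):

```r
# The likelihood e^{-1/theta} approaches 1 for large theta, so its
# integral over (0, T] keeps growing with T and never converges.
lik <- function(theta) exp(-1 / theta)
I_small <- integrate(lik, 0, 10)$value
I_large <- integrate(lik, 0, 1000)$value
c(I_small, I_large)   # the mass keeps growing almost linearly with the range
```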
|
40,514
|
Property of two independent Beta distribution
|
It does not seem to be a correct conjecture.
It seems your condition is that the mode for $X$ is greater than the mode for $Y$. Since in non-symmetric Beta distributions, the mode is not equal to the mean, it should be possible to find a counterexample.
One way is to take a non-symmetric case where $\frac{a_1}{a_1+b_1}=\frac{a_2}{a_2+b_2}$, find which tends to be greater more often, and then adjust the parameters slightly to get a counter-example which works.
So, using R, we might look at
set.seed(2023)
XgreaterY <- function(a1, b1, a2, b2, cases){
  mean(rbeta(cases, a1+1, b1+1) > rbeta(cases, a2+1, b2+1))
}
XgreaterY(3, 1, 30, 10, 10^7)
# 0.390946
XgreaterY(3, 1, 29, 11, 10^7)
# 0.4385228
suggesting that $a_1=3, b_1=1, a_2=29, b_2=11$ provides a counterexample, since $\frac{a_1}{a_1+b_1} = 0.75 > 0.725 = \frac{a_2}{a_2+b_2}$ but $\mathbb{P}(X>Y) \approx 0.44 < 0.5$.
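As a deterministic cross-check of the simulation (an addition to the answer above), $\mathbb P(X>Y)$ can also be computed by numerical integration. Under the same mode parameterisation, $X\sim\text{Beta}(4,2)$ and $Y\sim\text{Beta}(30,12)$:

```r
# P(X > Y) = integral over (0, 1) of f_X(x) * F_Y(x) dx
pXgtY <- integrate(function(x) dbeta(x, 4, 2) * pbeta(x, 30, 12), 0, 1)$value
pXgtY   # about 0.4385, agreeing with the Monte Carlo estimate and below 1/2
```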
|
40,515
|
How to perform a 2 sided binomial test with the alternative being larger
|
Let $X_1,\ldots,X_k$ be the number of successes in group $A$, and $Y_1,\ldots,Y_l$ for group $B$. By assumption
$$X_i \sim \text{Binomial}(n_i, \theta_A),\quad i=1,\ldots,k,$$
$$Y_j \sim \text{Binomial}(m_j, \theta_B), \quad j=1,\ldots,l$$
and $X_i$'s are independent, $Y_j$'s are independent and $X_i,Y_j,$ are also independent.
Let $S = \sum_i X_i$, $T = \sum_j Y_j$, $n = \sum_i n_i$ and $m = \sum_j m_j$. Then by the closure properties of the binomial distribution
$$
S \sim \text{Binomial}(n, \theta_A), \quad T \sim \text{Binomial}(m, \theta_B).
$$
Thus the test of $H_0:\theta_A=\theta_B$ boils down to testing for a difference between two binomial samples. This problem can be solved by a Wald test, a likelihood ratio test, a Rao score test, or an exact $\alpha$-level test. I'll work out the details of the Wald test here.
Let $\hat \theta_A,\hat\theta_B$ be the maximum likelihood estimators (MLEs) of $\theta_A$ and $\theta_B$ respectively. Then, by the large-sample properties of the MLE, we have
$$
\hat\theta_A\,\dot\sim\, N\left(\theta_A, \frac{\hat\theta_A(1-\hat\theta_A)}{n}\right),\quad \hat\theta_B\,\dot\sim\, N\left(\theta_B, \frac{\hat\theta_B(1-\hat\theta_B)}{m}\right),
$$
and $\hat\theta_A$ is independent from $\hat\theta_B$. By the closure properties of the Normal distribution we have
$$
\hat\theta_A-\hat\theta_B \,\dot\sim\, N\left(\theta_A-\theta_B,\frac{\hat\theta_A(1-\hat\theta_A)}{n}+\frac{\hat\theta_B(1-\hat\theta_B)}{m}\right).
$$
Thus
$$
W = \frac{\hat\theta_A-\hat\theta_B-(\theta_A-\theta_B)}{\left(\frac{\hat\theta_A(1-\hat\theta_A)}{n}+\frac{\hat\theta_B(1-\hat\theta_B)}{m}\right)^{1/2}}\,\dot\sim\, N(0,1).
$$
Here "$\dot\sim$" means "approximately distributed as, for large $n+m$".
The Wald test is:
Reject $H_0:\theta_A=\theta_B$ if $|W^{obs}|>z_{\alpha/2}$
where $W^{obs}$ is $W$ computed at the observed data (with $\theta_A-\theta_B=0$ under $H_0$) and $z_{\alpha/2}$ is the upper $\alpha/2$ quantile of the standard normal distribution.
An approximate test of this kind can be computed with R using the prop.test command; but see also chisq.test for a chi-squared goodness-of-fit test. For an exact test, you can either use Fisher's exact test (fisher.test) or have a look at the exact2x2 package. I presume that in your case the sample size is sufficiently large so approximate tests, such as the Wald test, will be fine.
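For instance, with made-up pooled counts (say $S=40$ successes out of $n=100$ in group $A$ and $T=60$ out of $m=100$ in group $B$; these numbers are illustrative, not from the question), the approximate test is one line of R:

```r
# prop.test compares the two pooled proportions (with continuity
# correction by default).
res <- prop.test(x = c(40, 60), n = c(100, 100))
res$p.value    # well below 0.05 here, so H0: theta_A = theta_B is rejected
res$conf.int   # approximate 95% CI for theta_A - theta_B
```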
|
40,516
|
Do Spline Models Have The Same Properties Of Standard Regression Models?
|
If you’re comfortable with the usual assumptions (e.g., Gauss-Markov conditions), then yes.
Spline regressions just apply linear regression to spline-transformed variables. In many regards, that is no different from including a quadratic or a $\log$ as a feature, but instead you do a spline transformation.
Once you have your features, including any features you engineered through transformations, you just run linear regression on those numbers. The regression machinery does not know or care how you got those features.
I do have some doubts about some of the usual conditions holding, but I suppose omitted-variable bias could always be a concern.
This video by MathematicalMonk (Jeffrey Miller) does a good job of explaining the idea of incorporating some kind of engineered “feature space” that combines the original features and transformations involving one or several features.
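A minimal sketch on simulated data showing that a spline fit is literally `lm()` on engineered columns (`splines` ships with the standard R distribution; the data-generating function here is an arbitrary choice):

```r
library(splines)          # B-spline basis functions
set.seed(1)
x <- runif(200, 0, 10)
y <- sin(x) + rnorm(200, sd = 0.3)

# bs(x, df = 5) just builds 5 basis columns; lm() treats them
# like any other engineered features.
fit <- lm(y ~ bs(x, df = 5))
length(coef(fit))         # intercept + 5 basis coefficients = 6
```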
|
40,517
|
How to simulate data for an interaction?
|
I see it as a question of how to generate data from a (linear?) model with interaction. See below for the R code where y is the response, x is the covariate and z is the binary moderator.
set.seed(12)
n <- 1000
# simulate a covariate
x <- rnorm(n)
# simulate a binary moderator
z <- sample(c(0,1),size = n, replace = T)
beta0 <- 1
beta1 <- 2
sigma0 <- 1
# mu
mu <- beta0 + beta1*z + 0.1*x*z + 0.1*x
y <- rnorm(n, mu, sigma0)
dd <- data.frame(y=y, x=x, z=z)
summary(lm(y~x*z, data = dd))
Call:
lm(formula = y ~ x * z, data = dd)
Residuals:
Min 1Q Median 3Q Max
-3.11036 -0.63836 -0.00177 0.64706 3.00070
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) 0.98525 0.04612 21.364 < 2e-16 ***
x 0.15460 0.04985 3.101 0.00198 **
z 2.02830 0.06320 32.092 < 2e-16 ***
x:z 0.04727 0.06632 0.713 0.47615
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
Residual standard error: 0.9966 on 996 degrees of freedom
Multiple R-squared: 0.5137, Adjusted R-squared: 0.5123
F-statistic: 350.7 on 3 and 996 DF, p-value: < 2.2e-16
|
40,518
|
What does "parameterized by" mean?
|
“Parametrized by” means that the function $f$ of $x$ has additional parameters $\Theta$, so we can write it as $f(x;\Theta)$. In such a case, we view it as a function of $x$ for a given, fixed value of $\Theta$.
Likelihood has two meanings, the traditional one and the Bayesian one. Traditionally, likelihood is written as
$$
L(\theta|x) \propto P(x|\theta)
$$
The vertical bar $|$ is used on the right-hand side to denote conditional probability and is a slight abuse of notation on the left-hand side. People write it as $L(\theta|x)$ to show that we keep the data $x$ fixed, but we evaluate the function for different parameters $\theta$. In a Bayesian setting, you usually would not see the left-hand-side notation; just the right-hand side will be called the likelihood (here you might have seen $L$ used instead of $P$). See the thread "Wikipedia entry on likelihood seems ambiguous" for more discussion.
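A tiny illustration of both readings at once: holding the data fixed at $x=7$ successes in $n=10$ trials (an arbitrary example), the binomial probability $P(x|\theta)$ becomes the likelihood $L(\theta|x)$ when read as a function of $\theta$:

```r
# Fix the data (x = 7, n = 10) and vary the parameter theta.
theta <- seq(0.01, 0.99, by = 0.01)
L <- dbinom(7, 10, theta)   # same formula as P(x | theta), now a function of theta
theta[which.max(L)]         # maximised at theta = 0.7, i.e. the MLE x/n
```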
|
40,519
|
What does "parameterized by" mean?
|
What is parameter? What is a parametric model?
Definition $1.$ Let $(\Omega,\mathfrak F)$ be a probability space. The set of probability measures $\{ \mathbb P_\theta:~{\boldsymbol \theta\in\Theta}\}$ indexed by a parameter $\boldsymbol\theta$ is a parametric family if and only if $\Theta\subseteq\mathbb R^n,~n\in\mathbb Z^{>0}$, and each probability measure is known when $\boldsymbol\theta$ is known. Here $\Theta$ is the parameter space.
Remark $1.$ A parametric model assumes the population comes from a parametric family.
Example $1.$ A parametric family of $n\in\mathbb Z^{>0}$ dimensional normal distributions indexed by $( \boldsymbol\mu, \mathbf \Sigma) $ is
$$\{\mathcal N_n(\boldsymbol \mu,\mathbf \Sigma): \boldsymbol\mu\in\mathbb R^n,\mathbf \Sigma\in\mathcal M_n\}.\tag 1$$
So, basically the probability measure is labeled or indexed by parameter(s) and the primary objective of inference is to draw information about the parameter $\theta$ in order to know the probability measure (if it is $\sigma$-finite, then the set is identified by the densities; so the problem is to infer about the parameter to draw information about the associated pdf).
$\bullet$ $\mathcal L(\boldsymbol\theta|\mathbf y) $ denotes, given that $\mathbf Y= \mathbf y$ is realised or observed, the likelihood of parameter $\boldsymbol\theta.$
References and further reading:
$\rm [I]$ Mathematical Statistics, Jun Shao, Springer Science$+$Business Media, $2003, $ section $2.1.2, $ p. $94.$
$\rm [II]$ Testing Statistical Hypotheses, E. L. Lehmann, Joseph P. Romano, Springer Science$+$Business Media, $2005, $ section $1.1, $ p. $3.$
$\rm [III]$ Statistical Inference, George Casella, Roger L. Berger, Wadsworth, $2002,$ section $6.3, $ p. $290.$
|
40,520
|
How to report a P-value?
|
When is it appropriate to give the exact P-value, instead of writing e.g. P<0.05 (also in case of non-significant P-values)?
As a general guideline you want to convey as much information about your results as possible. When you report a p-value as p=0.016 instead of as p<0.02 then the readers will have a more precise view of the results. Often it is also helpful to report the statistic along with it. For instance you can encounter sentences like: "There was a positive effect of X with a coefficient $\beta_X = 0.5$, which was significant ($t = 2.6$, $p = 0.017$)".
The reporting of only $p<0.05$ can be done when one is only interested in strict cut-off levels (but often the cut-off levels are arbitrary). It is also common in graphs or tables where significance is denoted with superscripts like $^\star: p < 0.05$, ${^\star}{^\star}: p < 0.01$, ${^\star}{^\star}{^\star}: p < 0.001$ and in that case it is used as abbreviation to prevent cluttered graphs and tables that become difficult to read.
Instead of only p-values, it is also increasingly popular to report confidence intervals. Then the sentence above would become "There was a significant positive effect of X with a coefficient $\beta_X = 0.5$ (95% confidence interval [0.099, 0.901])".
If I have a P-value of e.g. 0.016, should I report it as P=0.01 or P=0.02?
Rounding p-values might give a false impression of their precision. However, you could round up and use an inequality sign, as in $P<0.02$.
When I have a very small P-value, e.g. 0.00000013, is it appropriate to report it as P<0.000001, or is it better to stop at e.g. P<0.001?
The precision that is appropriate will depend on the application.
Often more precision than $0.001$ is not needed. Higher precision can also be deceptive, because the computation depends on several assumptions: the p-value is an estimate that can be computed with high precision, but it does not truly have that precision, given the uncertainty in the assumptions underlying the computation. The computation under some model can be carried out precisely, but that does not mean the result should be considered precise (the potential error due to a wrong model should be considered as well).
In exact sciences like physics, where high-precision measurements are possible and a strong theoretical framework is present, it is more common to see high-precision p-values. For instance, in some fields of physics one requires the significance to be above $5\sigma$, which is equivalent to $p < 0.00000057$.
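The reporting conventions mentioned above (rounding, an inequality for very small values, and the star abbreviations) can be sketched as a small helper; the thresholds below are simply the ones named in this answer, not a universal standard:

```python
def format_p(p, floor=0.001):
    """Format a p-value following the conventions in the answer.

    Values below `floor` are reported as an inequality (e.g. "p < 0.001");
    everything else is rounded to three decimals.
    """
    if p < floor:
        return f"p < {floor}"
    return f"p = {p:.3f}"

def significance_stars(p):
    """Map a p-value to the star abbreviation used in tables and graphs."""
    if p < 0.001:
        return "***"
    if p < 0.01:
        return "**"
    if p < 0.05:
        return "*"
    return ""  # not significant at the 0.05 level

print(format_p(0.016))            # p = 0.016
print(format_p(0.00000013))       # p < 0.001
print(significance_stars(0.016))  # *
```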
|
40,521
|
How to report a P-value?
|
In practice, people report the exact p-value and then declare whether the hypothesis test is statistically significant. Most statisticians don't advocate this: if the goal is hypothesis testing, then reporting the p-value is moot because it doesn't make the test more significant; on the other hand, the exact p-value matters if the result of the analysis doesn't boil down to a decision-theoretic approach. So if the audience is not statisticians, they will probably expect you to do it the wrong way.
The significant digits on a p-value matter. First, you have to follow whatever style guide there is for your thesis or publication. If there are no rules, you need to consider the variability of the analysis. In general, it's hard to say how "stable" a p-value is without simulation or bootstrapping, but you'll often find that even with a huge analysis, a p-value is only really stable out to two decimal places; reporting 3 decimal places then conveys a level of precision that is not present in the analysis. Secondly, if there is a pre-specified "alpha" level, you need to show enough precision for the reader to know whether p < alpha. In other words, an alpha of 0.05 requires two decimal places of precision if p = 0.04 or lower, but borderline cases get awkward: do you say p = 0.049994, or p = 0.05 but note that it was statistically significant?
Don't be fooled by the incidental precision of the analysis. Use the pre-specified precision following the rules above, and report out to the smallest decimal. Other aspects of the analysis should convey how striking the result is; a "really small p-value" in and of itself means nothing. And never, ever report p = 0.00.
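The point about p-values only being stable to a couple of decimal places can be illustrated with a quick simulation; the setup below (a z-test with a fixed true effect, chosen arbitrarily) is just an illustration, not a general rule:

```python
import math
import random

def z_test_p(sample, mu0=0.0, sigma=1.0):
    """Two-sided p-value for a z-test of the mean (known sigma)."""
    n = len(sample)
    z = (sum(sample) / n - mu0) / (sigma / math.sqrt(n))
    return math.erfc(abs(z) / math.sqrt(2))

random.seed(1)
# Repeat the same experiment many times: n = 50 draws from N(0.3, 1).
pvals = [z_test_p([random.gauss(0.3, 1.0) for _ in range(50)])
         for _ in range(1000)]
pvals.sort()
# Even with a real effect, replicate p-values vary over orders of
# magnitude, so digits beyond the first couple of decimals carry
# little information about the experiment itself.
print(f"10th pct: {pvals[100]:.4f}, median: {pvals[500]:.4f}, "
      f"90th pct: {pvals[900]:.4f}")
```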
|
40,522
|
Why exactly does a classifier need the same prevalence in the train and test sets?
|
"Why exactly does a classifier need the same prevalence in the train and test sets?"
Perhaps my answer to a related question on the DS SE might help
Doesn't over(/under)sampling an imbalanced dataset cause issues?
Yes, the classifier will expect the relative class frequencies in
operation to be the same as those in the training set. This means
that if you over-sample the minority class in the training set, the
classifier is likely to over-predict that class in operational use.
To see why, it is best to consider probabilistic classifiers, where the
decision is based on the posterior probability of class membership
$p(C_i|x)$, which can be written using Bayes' rule as
$$p(C_i|x) = \frac{p(x|C_i)p(C_i)}{p(x)}, \qquad \text{where} \qquad p(x) = \sum_j p(x|C_j)p(C_j),$$
so we can see that the decision depends on the prior probabilities of
the classes, $p(C_i)$, so if the prior probabilities in the training
set are different than those in operation, the operational performance
of our classifier will be suboptimal, even if it is optimal for the
training set conditions.
Some classifiers have a problem learning from imbalanced datasets, so
one solution is to oversample the classes to ameliorate this bias in
the classifier. There are two approaches. The first is to oversample
by just the right amount to overcome this (usually unknown) bias and
no more, but that is really difficult. The other approach is to
balance the training set and then post-process the output to
compensate for the difference in training set and operational priors.
We take the output of the classifier trained on an oversampled dataset
and multiply by the ratio of operational and training set prior
probabilities,
$$q_o(C_i|x) \propto p_t(x|C_i)p_t(C_i) \times \frac{p_o(C_i)}{p_t(C_i)} = p_t(x|C_i)p_o(C_i).$$
Quantities with the $o$ subscript relate to operational conditions and
those with the $t$ subscript relate to training set conditions. I have
written this as $q_o(C_i|x)$ as it is an un-normalised probability,
but it is straightforward to renormalise by dividing by the sum
of $q_o(C_i|x)$ over all classes. For some problems it may be better
to use cross-validation to choose the correction factor, rather than
the theoretical value used here, as it depends on the bias in the
classifier due to the imbalance.
So in short, for imbalanced datasets, use a probabilistic classifier
and oversample (or reweight) to get a balanced dataset, in order to
overcome the bias a classifier may have for imbalanced datasets. Then
post-process the output of the classifier so that it doesn't
over-predict the minority class in operation.
Specific issues:
If we have a very imbalanced data set (say 99:1 for two classes), I
don't see why balancing the training set would introduce any problems.
It doesn't present a problem, provided you post-process the output of the model to compensate for the difference in training set and operational class frequencies. If you don't perform that adjustment (or you use a discrete yes-no classifier) you will over-predict the minority class for the reason given above.
"fans of “classifiers” sometimes subsample from observations in the
most frequent outcome category (here Y=1) to get an artificial 50/50
balance of Y=0 and Y=1 when developing their classifier. Fans of such
deficient notions of accuracy fail to realize that their classifier
will not apply to a population when a much different prevalence of Y=1
than 0.5."
I don't think this accurately represents the situation. The reason for balancing is actually that the minority class is "more important" in some sense than the majority class, and the rebalancing is an attempt to include misclassification costs so that the model works better in operational conditions. However, a lot of blogs don't explain that properly, so a lot of practitioners are rather misinformed about it.
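The post-processing step described above (multiplying by the ratio of operational and training priors, then renormalising) can be sketched as follows, assuming a probabilistic classifier whose outputs sum to one; all numbers are hypothetical:

```python
def correct_priors(probs, train_priors, oper_priors):
    """Adjust predicted class probabilities for a change in class
    priors between training and operational conditions.

    probs[i] is the model's p_t(C_i | x); the result is the
    renormalised q_o(C_i | x) ∝ p_t(C_i | x) * p_o(C_i) / p_t(C_i).
    """
    q = [p * po / pt for p, pt, po in zip(probs, train_priors, oper_priors)]
    total = sum(q)
    return [v / total for v in q]

# Hypothetical example: trained on a 50/50 balanced set, deployed where
# the positive class really occurs 1% of the time.
p_balanced = [0.30, 0.70]  # model output for classes [neg, pos]
adjusted = correct_priors(p_balanced, [0.5, 0.5], [0.99, 0.01])
print(adjusted)  # the positive-class probability shrinks sharply
```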
|
40,523
|
Why exactly does a classifier need the same prevalence in the train and test sets?
|
I would proceed with caution. The only time I would rebalance would be if I knew the characteristics of the original population, so that when I rebalanced, the sample would reflect the proportions of the true population. In other words, treat it as a true sample, and weight it. But arbitrarily adding or subtracting from groups may remove important data from the majority classes, and multiply error or introduce bias if adding to minority classes. If you later received new data which didn't match the class breakout of the old data, it would be a headache to have to go back and refine your analysis, since you would have made incorrect assumptions about the class breakouts.
However, having said that, it is true that some ML algorithms, like decision tree algorithms, output trivial results with imbalanced classes, and sometimes it is OK to balance to equal classes if your goal is to understand the model rules rather than to make predictions.
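A minimal sketch of the "treat it as a true sample, and weight it" idea: compute per-class weights so that the weighted sample matches a known population prevalence. The prevalences below are hypothetical:

```python
from collections import Counter

def class_weights(labels, population_prevalence):
    """Per-class weights that make a sample's weighted class
    frequencies match known population prevalences.

    weight_c = (population share of c) / (sample share of c)
    """
    counts = Counter(labels)
    n = len(labels)
    return {c: population_prevalence[c] / (counts[c] / n)
            for c in population_prevalence}

# Hypothetical: the sample is 50/50 but the population is known to be 90/10.
labels = ["neg"] * 50 + ["pos"] * 50
w = class_weights(labels, {"neg": 0.9, "pos": 0.1})
print(w)  # {'neg': 1.8, 'pos': 0.2}
```

With these weights, the weighted prevalence of "pos" in the sample is (50 × 0.2) / (50 × 1.8 + 50 × 0.2) = 0.1, matching the population.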
|
40,524
|
Is correlation a percentage?
|
It’s wrong, and if a reviewer wants to tell you to change it, you have no argument. I would not, however, consider that to be more than a typo (minor revision), even if I said it should be changed.
I see an argument that it's just slang that perhaps has no place in formal writing like a scientific article but is fine for casual discussions. However, squaring the correlation has an interpretation as a proportion or percentage (it's $R^2$ in a regression involving your two variables), so I do not like such slang. If you mentioned having a correlation of $81\%$, that could correspond to $r=0.81$, $r=0.9$, or $r=-0.9$.
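The ambiguity is easy to see numerically: a reported "81%" is consistent with three different correlations, since $R^2 = r^2$:

```python
# Three different correlations a reader might reconstruct from a
# reported "81%": r itself, or either sign of sqrt(0.81) via R^2.
candidates = [0.81, 0.9, -0.9]
for r in candidates:
    print(f"r = {r:+.2f}  ->  R^2 = {r * r:.4f}")
```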
|
40,525
|
Is correlation a percentage?
|
Informally, you can do whatever works for you/your group. I can see a lot of reasons why it can be easy to visualize a set of positive correlations as proportions toward two ideal states (0 and 1).
However, formally, I think this is wrong on many conceptual levels. Most importantly, correlation measures are (usually) not additive. This means that the difference in information between $r = 0.5$ and $r = -0.5$ is not the same as the difference between $r = 1$ and $r = 0$, even though the metric difference is the same; and this holds, roughly, for whichever two pairs of points on the scale you take.
Personally, I came to the idea that if you are not at the market, adding $\%$ to numbers is always a bad choice, because it is misleading.
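One way to make the non-additivity concrete is the Fisher transformation $z = \operatorname{atanh}(r)$, on which scale the sampling distribution of a correlation is approximately normal with a variance that does not depend on $r$; the same step in $r$ then covers very different distances in $z$:

```python
import math

def fisher_z(r):
    """Fisher transformation of a correlation coefficient."""
    return math.atanh(r)

# The same 0.09 step in r covers very different distances in z.
for lo, hi in [(0.0, 0.09), (0.9, 0.99)]:
    dz = fisher_z(hi) - fisher_z(lo)
    print(f"r: {lo:.2f} -> {hi:.2f}   dz = {dz:.3f}")
```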
|
40,526
|
Covariance of random sums
|
Law of total covariance (wiki) helps here. What you found is the case where $N$ and $N'$ are given:
$$\operatorname{cov}(X,Y|N,N')=\operatorname{var}(a)\min(N,N')$$
$$\mathbb E[X|N,N']=N\,\mathbb E[a], \qquad \mathbb E[Y|N,N']=N'\,\mathbb E[a]$$
Plugging in gives us:
$$\operatorname{cov}(X,Y)=\operatorname{var}(a)\mathbb E[\min(N,N')]+\mathbb E[a]^2\operatorname{cov}(N,N')$$
I'm not sure how you can simplify the expected value of the minimum.
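As a sanity check, the identity can be verified by Monte Carlo simulation; the distributions below (uniform $a_i$ shared between the two sums, integer $N$ and $N'$ correlated through a common component) are arbitrary choices for illustration:

```python
import random

random.seed(42)

# Model: X = sum of the first N terms of a shared i.i.d. sequence
# a_1, a_2, ..., and Y = sum of the first N' terms; N and N' are
# correlated through a common component b.
E_a, var_a = 0.5, 1.0 / 12.0  # a_i ~ Uniform(0, 1)

reps = 100_000
xs, ys, mins, ns, n2s = [], [], [], [], []
for _ in range(reps):
    b = random.randint(0, 5)       # shared component -> cov(N, N') > 0
    n = b + random.randint(0, 5)
    n2 = b + random.randint(0, 5)
    a = [random.random() for _ in range(max(n, n2))]
    xs.append(sum(a[:n]))
    ys.append(sum(a[:n2]))
    mins.append(min(n, n2))
    ns.append(n)
    n2s.append(n2)

def cov(u, v):
    mu = sum(u) / len(u)
    mv = sum(v) / len(v)
    return sum((x - mu) * (y - mv) for x, y in zip(u, v)) / len(u)

lhs = cov(xs, ys)                                  # direct estimate
rhs = var_a * (sum(mins) / reps) + E_a ** 2 * cov(ns, n2s)  # the formula
print(f"Monte Carlo cov(X, Y) = {lhs:.3f}, formula = {rhs:.3f}")
```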
|
40,527
|
Statistics question: Why is the standard error, which is calculated from 1 sample, a good approximation for the spread of many hypothetical means?
|
As I understand it, the standard error is the spread of many sample means in an attempt to gauge how precise (not accurate) our estimate of the population mean is, but what if there's just the one sample?
Very short
A sample is not just one sample but contains many individual observations. Each of the observations can be considered as a sample (is there a difference between '$n$ samples of size 1' and '1 sample of size $n$'?). So you actually have multiple samples that can help to estimate the standard error in sample means.
In order to estimate the variance of the mean of samples, would you rather have a sample of size one million or multiple (say a hundred) samples of ten?
A bit longer
A sample will almost never be picked such that it perfectly matches the population. Sometimes a sample might pick relatively low values, sometimes a sample might pick relatively high values.
The variation in the sample mean, due to these random variations in picking the sample, is related to the variation in the population that is sampled. If the population has a wide spread of high and low values, then the deviations in random samples with relatively high/low values will correspond to this wide spread, and they will be large.
The error/variation in the means of samples relates to the variance of the population. So we can estimate the former with the help of an estimate of the latter: we can estimate the variance of sample means from the variance of the population. And for this estimate of the variance of the population, one single sample is sufficient.
In formula form
The standard deviation of the sample means, $\sigma_n$, where the samples are of size $n$, is related to the standard deviation of the population, $\sigma$: $$\sigma_n = \frac{\sigma}{\sqrt{n}}$$
So an estimate of $\sigma$, for which a single sample is sufficient, can also be used to estimate $\sigma_n$.
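A quick simulation, with an arbitrary normal population, shows the relation $\sigma_n = \sigma/\sqrt{n}$ in action:

```python
import random
import statistics

random.seed(0)
population_sd, n, n_samples = 10.0, 25, 4000

# Draw many samples of size n from the population and record each mean.
means = [statistics.mean(random.gauss(0.0, population_sd) for _ in range(n))
         for _ in range(n_samples)]

observed = statistics.stdev(means)
predicted = population_sd / n ** 0.5  # sigma / sqrt(n) = 2.0
print(f"sd of sample means: {observed:.3f}, sigma/sqrt(n): {predicted:.3f}")
```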
|
40,528
|
Statistics question: Why is the standard error, which is calculated from 1 sample, a good approximation for the spread of many hypothetical means?
|
I propose to put some visuals/intuition to your question... using an empirical approach (bootstrapping) to make it more concrete, especially in reference to the following:
Usually experiments can't or just aren't repeated and only have 1 sample from a population
As you highlighted, we are talking about the standard error of a statistic (the mean in our case). So, let's assume that you have a random sample of 20 people's heights from a given country:
## [1] 192.3214 144.4797 151.3796 155.2519 147.5844 147.9056 171.1867 159.3074
## [9] 163.0097 190.9857 165.8155 198.2192 192.2418 165.3628 186.9498 167.3355
## [17] 148.6400 156.6933 160.8472 174.4827
From this sample, you get a mean of 167 and a standard deviation of 17.
You have only one random sample, but you can imagine that if you could take another one, you might get similar values, sometimes duplicates or sometimes more extreme values... but something that will look like your initial random sample.
So, from these initial sample values and without inventing new ones (only resampling with replacement), you can imagine many other samples. For example, we can imagine three as follows:
## [1] 165.8155 159.3074 148.6400 165.3628 155.2519 151.3796 192.2418 163.0097
## [9] 159.3074 192.2418 186.9498 163.0097 144.4797 198.2192 159.3074 190.9857
## [17] 165.3628 159.3074 167.3355 156.6933
## [1] 147.5844 147.9056 151.3796 163.0097 167.3355 159.3074 167.3355 156.6933
## [9] 156.6933 159.3074 147.9056 190.9857 192.2418 171.1867 198.2192 147.9056
## [17] 155.2519 167.3355 148.6400 165.8155
## [1] 192.2418 198.2192 156.6933 192.3214 148.6400 192.3214 198.2192 165.8155
## [9] 167.3355 144.4797 163.0097 148.6400 159.3074 163.0097 163.0097 174.4827
## [17] 165.3628 165.8155 174.4827 159.3074
Their respective means will be different from the initial one... but what is interesting is that if we repeat this resampling exercise 10,000 times, for instance, and we calculate the mean for each of these generated samples, we will get something like the following (leaving the R code here just to illustrate), a distribution of means centered around the initial sample mean:
set.seed(007)
spl <- 167+17*scale(rnorm(20))[,1] #Forcing to have same mean and sd for all samples
library(boot)
myFunc <- function(data, i){
return(mean(data[i]))
}
bootMean <- boot(spl , statistic=myFunc, R=10000)
hist(bootMean$t, xlim=c(150,185), main="Sample size n=20")
abline(v=mean(spl), col="blue")
So, the histogram above represents the distribution of means of 10,000 samples… that we constructed from our initial sample. Empirically, we can determine the standard deviation of this (sampling) distribution (which is our standard error of the mean):
sd(bootMean$t)
## [1] 3.74095
Interestingly enough, if we calculate the formula for the standard error $\frac{s}{\sqrt n}$, we get something very similar:
sd(spl)/sqrt(20)
## [1] 3.801316
The standard error of the mean tells us about the spread of the sample means around the population mean.
To finish this intuitive overview, let's see what happen if we increase our initial sample size (to understand the impact of this $\sqrt{n}$).
So, if we repeat the same exercise with a larger initial sample (n = 500), the standard error gets unsurprisingly smaller... we reduce the error in estimating the population mean. Again, we can check empirically that the formula still holds:
sd(bootMean$t)
## [1] 0.7740625
sd(spl)/sqrt(500)
## [1] 0.7602631
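The same empirical check can be sketched outside R as well. Here is a minimal Python version (numpy only, with hypothetical normally distributed data standing in for the height sample), comparing the bootstrap standard deviation of the mean against $s/\sqrt{n}$:

```python
import numpy as np

rng = np.random.default_rng(7)
# Hypothetical stand-in for the height sample: n = 20, mean ~167, sd ~17
spl = rng.normal(167, 17, size=20)

# Resample with replacement 10,000 times and record each sample mean
boot_means = np.array([rng.choice(spl, size=spl.size, replace=True).mean()
                       for _ in range(10_000)])

se_boot = boot_means.std(ddof=1)                   # empirical SE of the mean
se_formula = spl.std(ddof=1) / np.sqrt(spl.size)   # s / sqrt(n)
print(se_boot, se_formula)  # the two values should be close
```

As in the R version, the empirical spread of the bootstrap means lands close to the formula value.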
Statistics question: Why is the standard error, which is calculated from 1 sample, a good approximation for the spread of many hypothetical means?
Frequentist methods have a concept called a confidence procedure. A confidence interval is an example of such a thing. It is the procedure that we have confidence in, not so much the specific interval or point estimate.
If you were to perform an experiment very many times over the sample space, you would get many different estimates. The estimators could be used to form a distribution of estimates. That is called the sampling distribution of the estimator. That distribution holds a predictable relationship with the population.
The standard error, as well as the sample mean, sample standard deviation, and so forth, are optimal procedures. That raises the question: optimal at what?
Most of these estimators are the best unbiased estimator of the population parameter. Usually, the definition of best is that it minimizes some form of loss function. It answers the question of "what is the best estimator of the true value in the population, under the restriction that my estimator will be unbiased and minimize loss."
If you were to repeatedly take samples from the population, you would find that the sample estimates would approximately average to the population parameter. However, without seeing any other samples, your single estimate is the best estimate. If you had many samples, you could, if you felt it necessary, perform a meta-analysis on the estimates collected up to that time.
The standard error is the best estimate from the data provided of the standard deviation of the sampling distribution of the estimator of interest.
A way to think about this is that every sample has signal and noise. The goal of each procedure is to capture as much signal as possible, subject to whatever constraints and rules you have in your optimization process, and to discard as much noise as possible. The sample standard error of the mean, the sample mean, the sample median, or the sample estimator of anything else is, for a given sample, the best estimate of the corresponding population quantity.
There is no Bayesian equivalent estimator because if a statistician uses a Bayesian procedure, then it just updates the posterior from the new sample without creating two estimates of the parameter of interest.
A sampling distribution is really an artifact of the procedure, such as attempting to find the population median. It isn't so much a property of the population as a property of trying to estimate a parameter of the population. It describes how widely the sample estimates will vary rather than how wide the population is. Because the sampling distribution depends on how wide the population is, there is a linkage between the descriptive statistics of the population and the descriptive statistics of the sampling process itself.
It is a bit dangerous to think in terms of precision. Imagine a school with an even number of students and the school is large. You flip a fair coin to put students in either group A or group B. You notice that SE(A)>SE(B).
What aspect of tossing a coin made group B's estimate more precise? Nothing of course. It isn't more precise, it is just different. Both are estimates of the sampling error, one just happens to be larger than the other.
The question of the usefulness of the standard error is another matter. If you observe the left big toe of five randomly chosen people from around the world, then you will have less precision in your estimate than if you estimated from a sample size of 30 million people. The standard error will tell you that you have about 2449 times less precision. Is that useful?
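The "about 2449 times" figure is just the square-root ratio of the two sample sizes, since the standard error scales as $1/\sqrt{n}$. A quick check in plain Python (numbers taken from the toe example above):

```python
import math

n_small, n_large = 5, 30_000_000
# SE scales as 1/sqrt(n), so the precision ratio is sqrt(n_large / n_small)
ratio = math.sqrt(n_large / n_small)
print(round(ratio))  # about 2449
```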
What is the parameter in Spearman's $\rho$
Suppose $(X_1,Y_1),(X_2,Y_2),\ldots,(X_n,Y_n)$ are i.i.d random vectors with a continuous distribution. Let $R_i =\operatorname{Rank}(X_i)$ among $X_1,X_2,\ldots,X_n$ and $Q_i=\operatorname{Rank}(Y_i)$ among $Y_1,Y_2,\ldots,Y_n$, $\,i=1,2,\ldots,n$.
Spearman's rank correlation coefficient is then the sample quantity
$$r_S=\frac{\sum_{i=1}^n \left(R_i-\frac{n+1}2 \right)\left(Q_i-\frac{n+1}2 \right)}{\sqrt{\sum_{i=1}^n \left(R_i-\frac{n+1}2 \right)^2}\sqrt{\sum_{i=1}^n \left(Q_i-\frac{n+1}2\right)^2}}$$
It can be shown that
$$E(r_S)\to \rho_G \quad\text{ as }n\to \infty\,, \tag{$\star$}$$
where $\rho_G$ is the grade correlation coefficient defined as
$$\rho_G=\operatorname{Corr}(F(X_1),G(Y_1))$$
Here $F$ and $G$ are the distribution functions of $X$ and $Y$ respectively.
So $r_S$ is an asymptotically unbiased estimator of $\rho_G$, and at least in this sense $\rho_G$ is a parameter of interest and can be considered to be a population counterpart of $r_S$.
On the other hand, the statistic $$T_n=\frac1{\binom{n}{2}}\sum_{1\le i<j\le n}\operatorname{sgn}(X_i-X_j)\operatorname{sgn}(Y_i-Y_j)$$ is exactly unbiased for its population counterpart, Kendall's tau:
$$\tau=E\left[\operatorname{sgn}(X_1-X_2)\operatorname{sgn}(Y_1-Y_2)\right]$$
If you note that
$$\sum_{j:j\ne i}\operatorname{sgn}(X_i-X_j)=(R_i-1)-(n-R_i)=2\left(R_i-\frac{n+1}2\right)$$
and similarly
$$\sum_{j:j\ne i}\operatorname{sgn}(Y_i-Y_j)=2\left(Q_i-\frac{n+1}2\right)\,,$$
we have this relation between $r_S$ and $T_n$:
\begin{align}
r_S&=\frac{12}{n(n^2-1)}\sum_{i=1}^n \left(R_i-\frac{n+1}2\right)\left(Q_i-\frac{n+1}2\right)
\\&=\frac3{n(n^2-1)}\sum_{i=1}^n \left\{\sum_{j\ne i}\operatorname{sgn}(X_i-X_j)\right\}\left\{\sum_{k\ne i}\operatorname{sgn}(Y_i-Y_k)\right\}
\\&=\frac3{n+1}T_n+\frac{3(n-2)}{n+1}U_n\,, \tag{1}
\end{align}
where $$U_n=\frac1{n(n-1)(n-2)}\sum_{i\ne j\ne k}\operatorname{sgn}(X_i-X_j)\operatorname{sgn}(Y_i-Y_k)$$
Using the independence of $X_2$ and $Y_3$, we can write
\begin{align}
E(U_n)&=E\left[\operatorname{sgn}(X_1-X_2)\operatorname{sgn}(Y_1-Y_3)\right]
\\&=E \left[ E\left[\operatorname{sgn}(X_1-X_2)\operatorname{sgn}(Y_1-Y_3)\mid X_1,Y_1 \right]\right]
\\&=E \left[ E\left[\operatorname{sgn}(X_1-X_2)\mid X_1 \right] E\left[\operatorname{sgn}(Y_1-Y_3) \mid Y_1\right] \,\right]
\\&=E\left[\left(F(X_1)-(1-F(X_1))\right) \left(G(Y_1)-(1-G(Y_1))\right)\right]
\\&=4 E\left[\left(F(X_1)-\frac12\right)\left(G(Y_1)-\frac12\right)\right]
\\&=\frac13 \rho_G \tag{2}
\end{align}
Equations $(1)$ and $(2)$ then together imply $(\star)$.
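A quick numerical sanity check of the sign-sum identity and of the rank form of $r_S$ (a sketch in Python with numpy; random continuous data, so there are no ties):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50
x = rng.normal(size=n)
y = x + rng.normal(size=n)           # correlated, continuous (no ties)

# Ranks via double argsort (valid when there are no ties)
R = x.argsort().argsort() + 1.0
Q = y.argsort().argsort() + 1.0

# Identity: sum_{j != i} sgn(x_i - x_j) = 2 * (R_i - (n+1)/2)
sign_sums = np.array([np.sign(xi - np.delete(x, i)).sum()
                      for i, xi in enumerate(x)])
assert np.allclose(sign_sums, 2 * (R - (n + 1) / 2))

# With no ties the r_S formula reduces to 12/(n(n^2-1)) times the sum of
# rank cross-products, which equals the Pearson correlation of the ranks
r_s = 12 / (n * (n**2 - 1)) * np.sum((R - (n + 1) / 2) * (Q - (n + 1) / 2))
assert np.isclose(r_s, np.corrcoef(R, Q)[0, 1])
print(r_s)
```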
Typically we are interested in testing the null hypothesis $$H_0: X \text{ and }Y \text{ are independently distributed}$$
Under $H_0$, we have $\rho_G=0$ as well as $\tau=0$, which implies $E_{H_0}(r_S)=0$. The variance under $H_0$ can be shown to be $\operatorname{Var}_{H_0}(r_S)=\frac1{n-1}$. A large sample test is then based on
$$\sqrt{n-1}\,r_S \stackrel{d}\longrightarrow N(0,1)\quad, \text{ under }H_0$$
Note however, that this is not a test for $\rho_G=0$ and it does not give confidence intervals for $\rho_G$ or $E(r_S)$ since the asymptotic distribution of $r_S$ is derived only under $H_0$.
Reference:
Nonparametric Statistical Inference (5th ed.) by Gibbons and Chakraborti, pages 416-421.
Nonparametric Statistical Methods (3rd ed.) by Hollander/Wolfe/Chicken, pages 427-440.
Statistical Inference Based on Ranks by T.P. Hettmansperger.
Related question: Spearman's correlation as a parameter.
How to estimate the (approximate) variance of the weighted mean?
You can get a general answer to this question (and the specific answer) just from considering the variances of sums. Suppose there are $N$ individuals in the population and you sample $n$ of them. The $X_i$ are fixed (Bob's opinion is whatever it is, whether you measure it or not) but the sampling indicators are random ($R_{\textrm{Bob}}=1$ if you sampled Bob).
The population total is $T=\sum_{i=1}^N X_i$. Your estimator is
$$\hat T=\sum_{i=1}^N R_iw_iX_i$$
Its variance is
$$\mathrm{var}\left[\sum_{i=1}^N R_iw_iX_i \right]= \sum_{i,j=1}^Nw_iw_jX_iX_j\mathrm{cov}[R_i,R_j]$$
Now, that isn't any use because it depends on $X_i$ for unsampled $i$, but we can do a weighted estimate of the total, just like the weighted mean we started with:
$$\widehat{\mathrm{var}}[\hat T]= \sum_{i,j=1}^NR_iR_jw_{ij}w_iw_jX_iX_j\mathrm{cov}[R_i,R_j]$$
where $1/w_{ij}$ is (an estimate of) the probability that both $i$ and $j$ are sampled (and $R_iR_j$ is an indicator that both $i$ and $j$ were observed).
You could evaluate this for any precisely-specified sampling design, because you know the sampling probabilities.
Now make the approximation that the sampling is independent for different individuals (either $N$ is very large or $n$ isn't fixed and you're just sampling each individual independently). Only the $i=j$ terms remain and you get
$$\widehat{\mathrm{var}}[\hat T]= \sum_{i=1}^NR_iw_{i}w_iw_iX_i^2\mathrm{var}[R_i]$$
and approximating the sampling probability $\pi_i$ by $1/w_i$,
$$\widehat{\mathrm{var}}[\hat T]= \sum_{i=1}^NR_iw_{i}w_iw_iX_i^2w_i^{-1}(1-w_i^{-1})=\sum_{i\in\textrm{sample}}w_i^2X_i^2(1-w_i^{-1})$$
That's the total. By the same arguments, the denominator of the mean, the estimated $N$, has variance
$$\widehat{\mathrm{var}}[\hat N]= \sum_{i\in\textrm{sample}}w_i^2(1-w_i^{-1})$$
Next, we decide to apply this to $X=Y-\bar Y_w$, and use the (Taylor series) approximation for the variance of a ratio
$$\widehat{\mathrm{var}}\left[\bar Y_w \right]= \frac{T^2}{N^2}\left(\frac{\mathrm{var}[\hat T]}{E[\hat T]^2} -2\frac{\mathrm{cov}[\hat T, \hat N]}{E[\hat T]E[\hat N]} + \frac{\mathrm{var}[\hat N]}{E[\hat N]^2} \right)$$
At this point we note that the covariance term and the second variance term are of smaller order than the first variance term, and that $\hat T$ is unbiased for $T$, so that it simplifies to $\mathrm{var}[\hat T]/N^2\approx\mathrm{var}[\hat T]/(\sum w_i)^2$
This doesn't give you quite what you want (we've lost the $n/(n-1)$ and acquired a $(1-w_i^{-1})$ factor), but doing the argument more carefully gives something closer.
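The total-estimator part of this argument is easy to check by simulation. The sketch below (Python/numpy, made-up population values) assumes independent Bernoulli ("Poisson") sampling with known inclusion probabilities $\pi_i = 1/w_i$, and compares the Monte Carlo variance of $\hat T$ with both the theoretical variance and the average of the plug-in estimate $\sum_{i\in\textrm{sample}} w_i^2 X_i^2 (1 - w_i^{-1})$:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 200
x = rng.uniform(1, 10, size=N)        # fixed population values X_i
pi = rng.uniform(0.2, 0.8, size=N)    # inclusion probabilities, w_i = 1/pi_i
w = 1 / pi

# True variance of the weighted total under independent sampling:
# only the i = j terms survive, giving sum_i x_i^2 (1 - pi_i) / pi_i
true_var = np.sum(x**2 * (1 - pi) / pi)

reps = 20_000
T_hat = np.empty(reps)
var_hat = np.empty(reps)
for r in range(reps):
    R = rng.random(N) < pi                 # Bernoulli sampling indicators
    T_hat[r] = np.sum(w[R] * x[R])         # weighted (Horvitz-Thompson) total
    # Plug-in variance estimate from the sampled units only
    var_hat[r] = np.sum(w[R]**2 * x[R]**2 * (1 - 1 / w[R]))

print(T_hat.mean(), x.sum())                   # T-hat is unbiased for the total
print(T_hat.var(), true_var, var_hat.mean())   # all three agree closely
```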
How to estimate the (approximate) variance of the weighted mean?
A bit nonrigorous, but define $z_i = w_i y_i$. Calculate the variance for the total, $\sum_i z_i$, using the usual formula. Then replace $z_i$ with $w_i y_i$ and notice that the total of $z_i$ is the weighted mean. This is the fastest/most intuitive way to get it -- you're just treating $w_i y_i$ as a single random variable and estimating its variance.
(The formula will differ by a normalizing constant $(\sum_i w_i)^2$, which is just there to make sure that the weights in the squared term inside the sum add up to 1.)
The lack of rigor comes from ignoring the variance introduced by dividing out the normalizing constant. To show this is negligible you'd need a Taylor expansion, but that doesn't add intuition. The post above provides more rigor; my goal is just to show why you should expect it to be true.
Variance of Bernoulli when success probability varies
In general, you solve problems like this using the 'law of iterated variance'.
Let $Y|X \sim \text{Bern}(X)$ and use your stipulated prior mean and variance for the success probability $X$. Using the law of iterated variance, you get:
$$\begin{align}
\mathbb{V}(Y)
&= \mathbb{V}(\mathbb{E}(Y|X)) + \mathbb{E}(\mathbb{V}(Y|X)) \\[6pt]
&= \mathbb{V}(X) + \mathbb{E}(X(1-X)) \\[6pt]
&= \mathbb{V}(X) + \mathbb{E}(X) - \mathbb{E}(X^2) \\[6pt]
&= \mathbb{E}(X^2) - \mathbb{E}(X)^2 + \mathbb{E}(X) - \mathbb{E}(X^2) \\[6pt]
&= \mathbb{E}(X) - \mathbb{E}(X)^2 \\[6pt]
&= \mu(1-\mu). \\[6pt]
\end{align}$$
As you can see, the variance of $Y$ is determined by $\mu$, and is unaffected by $\sigma$. It turns out that this is true for all the moments of $Y$. In fact, we have $\mathbb{E}(f(Y)) = f(0) + [f(1)-f(0)] \mu$, so the expectation of any function of $Y$ is determined by $\mu$ and is unaffected by $\sigma$.
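A simulation makes the $\sigma$-invariance concrete. The sketch below (Python/numpy, hypothetical parameter choices) uses two Beta priors for $X$ with the same mean $\mu = 0.3$ but very different spreads, and checks that the empirical variance of $Y$ is $\mu(1-\mu)$ in both cases:

```python
import numpy as np

rng = np.random.default_rng(42)
mu = 0.3
reps = 1_000_000

def var_y(a, b):
    """Draw X ~ Beta(a, b), then Y | X ~ Bernoulli(X); return empirical Var(Y)."""
    x = rng.beta(a, b, size=reps)
    y = (rng.random(reps) < x).astype(float)
    return y.var()

# Two priors with the same mean mu = a/(a+b) = 0.3 but very different sigma
tight = var_y(30, 70)    # sd of X is small
wide  = var_y(0.3, 0.7)  # sd of X is large
print(tight, wide, mu * (1 - mu))  # all three are close to 0.21
```

Both empirical variances match $\mu(1-\mu) = 0.21$, regardless of how spread out the prior on $X$ is.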
Is the emmeans R package performing causal inference G-computation?
Re-reading your question, my understanding is that you are asking if emmeans() does G-computation as part of what it ordinarily does. And based on my very limited understanding of causal models and G-computation, I would say the answer is NO. That is simply because we don't treat covariates in any special way. For a numerical covariate, the default action is to compute its mean and use that as a reference value for all subsequent estimates, regardless of whether it is regarded as a mediator or not. We just treat it as a direct effect.
There may be some options in emmeans() that do allow the user to treat covariates in a different way. For example, we can fit a model y ~ treat + M where treat is a treatment and M is a mediator. Then suppose we subsequently do
emmeans(model, "treat", cov.reduce = M ~ treat)
This instructs emmeans to not use the average value of M, but rather to use lm() to fit the model M ~ treat (with the same dataset) and use its predictions for the value of M. In that way, the reference value of M is different for each treatment level. This is equivalent to creating a covariate C that is equal to the residuals of the M ~ treat model, fitting the model y ~ treat + C, and using emmeans() in the ordinary way by using C's mean (which is zero) as the reference value. Perhaps this is similar to what G-computation does -- I am not sure, but perhaps someone else can shed some light on this. But at least it does something special with covariates thought to be mediators, and that seems more akin to what is needed in causal inference.
Addendum
A comment to this answer suggests doing something like emmeans(model, "treat", cov.reduce = FALSE, weights = "prop") but that this is very inefficient as it creates a huge reference grid. I believe that the following may do the same thing:
emmeans(model, "treat", submodel = ~ treat)
The above puts a linear constraint on the estimates whereby all the effects other than those of treat are replaced by predictions of those effects from the given submodel. See vignette("xplanations", "emmeans") for the gory details. But in words, what happens is that we are trying to obtain the predictions we would have obtained from the submodel, while still accounting for the reduction in error variance achieved by including the covariate in the model. I think this in fact does relate to some causal-inference methods, but I lack the depth of knowledge in that area to be sure.
In the case of mixed models and generalized linear models, the submodel constraint will not be quite the same as would be obtained by fitting the submodel with the same method. To accomplish this (or at least get closer), one can use a new feature in version 1.6.0 of emmeans to bring in covariate predictions from an external model. Suppose model was fitted using something like model <- lmer(y ~ M + treat + (1|SUBJ), ...)
Mmod <- lmer(M ~ treat + (1|SUBJ), ...)
Mpred <- function(grid)
list(M = predict(Mmod, newdata = grid, re.form = ~ 0))
emmeans(model, "treat", cov.reduce = extern ~ Mpred)
This is like the original cov.reduce = M ~ treat, except it uses Mmod instead of lm(M ~ treat) to do the predictions of M.
|
40,535
|
How to convert a list of integers into a probability distribution such that the smaller the integer the larger the probability value?
|
A simple way would be to use the first $n$ terms of the geometric series: if $0<r<1$, then
$$ 1+r+r^2+\dots+r^{n-1} = \frac{1-r^n}{1-r}. $$
So if you assign
$$ p_k:=\frac{1-r}{1-r^n}r^k \text{ for }k=0, \dots, n-1, $$
then these probabilities will sum to $1$. The parameter $r$ lets you tune how quickly the probabilities drop off.
R code:
rr <- c(0.5,0.7)
nn <- 21
(probs <- sapply(rr,function(xx)(1-xx)/(1-xx^nn)*xx^(0:(nn-1))))
colSums(probs) # yields 1 for both columns
plot(1:nn,probs[,1],ylim=range(c(0,probs)),las=1,pch=19,cex=1.2,xlab="",ylab="Probability")
points(1:nn,probs[,2],pch=19,cex=1.2,col="red")
legend("topright",col=c("black","red"),pch=19,pt.cex=1.2,legend=paste("r =",rr))
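The same construction in Python, extended to an arbitrary list of integers by ranking them first (the function name and the ranking step are my own addition, not part of the original answer):

```python
import numpy as np

def geometric_probs(values, r=0.5):
    """Assign p_k = (1-r)/(1-r^n) * r^k by rank, so the smallest
    integer gets the largest probability."""
    values = np.asarray(values)
    n = len(values)
    ranks = np.empty(n, dtype=int)
    ranks[np.argsort(values)] = np.arange(n)   # rank 0 = smallest integer
    return (1 - r) / (1 - r**n) * r**ranks

p = geometric_probs([10, 3, 7, 1], r=0.5)
print(p, p.sum())   # probabilities sum to 1; the value 1 gets the largest mass
```

Choosing `r` closer to 1 flattens the distribution, exactly as in the R plot above.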
|
40,536
|
$R^2$ on out-sample data set
|
It makes sense. It is more common, however, to keep the training and test sets separated, so that you train your model on the training set and then predict on the test set alone. From there you can calculate the prediction error, and a $R^2_{pred}$ if you like. (Train on $n$ data points, evaluate on $p$ data points, in your terms.)
You can also look up stuff like the PRESS statistic, and other cross validation methods.
|
40,537
|
$R^2$ on out-sample data set
|
It makes sense to apply $R^2= 1-{\sum(y_i-\hat y_i)^2}/{\sum(y_i-\bar y)^2}$ to the test set directly. It's a measure of the size of the squared residuals compared to the variance of the true values.
Alternatively, if you adopt the notion of deviance (see this answer), then you might use the null model from training data instead:
$$\tilde R^2= 1-\frac{\sum(y_i-\hat y_i)^2}{\sum(y_i-\bar y_{train})^2}$$
I've seen both in use, and both can be justified.
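For concreteness, both variants can be computed with a few lines of Python (the function name is mine, not from any package):

```python
import numpy as np

def r2_out_of_sample(y_test, y_pred, baseline_mean=None):
    """R^2 on a test set. Pass baseline_mean = training-set mean for the
    deviance-style variant; the default uses the test-set mean."""
    y_test = np.asarray(y_test, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    if baseline_mean is None:
        baseline_mean = y_test.mean()
    ss_res = np.sum((y_test - y_pred) ** 2)
    ss_tot = np.sum((y_test - baseline_mean) ** 2)
    return 1.0 - ss_res / ss_tot

print(r2_out_of_sample([1, 2, 3, 4], [1.1, 1.9, 3.2, 3.8]))  # ≈ 0.98
```

The only difference between the two versions is which mean enters the denominator.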
|
40,538
|
$R^2$ on out-sample data set
|
There are several definitions of $R^2$ that are equivalent for in-sample OLS linear regression.
The squared correlation between the $x$ and $y$ in simple linear regression
The squared correlation between the predictions $\hat y$ and truth $y$
The proportion of variance explained
A comparison of the model performance to the performance of a naïve model that always predicts $\bar y$, no matter what values the features take
The final one is the one that makes the most sense to me.
When you get to out-of-sample $R^2$, the first three present issues.
Typically, an out-of-sample metric is of interest to problems that require more complicated feature spaces than just one variable, so this is out.
Out-of-sample, all bets are off. If you've badly overfit, you could be in a position where high values of $y$ correspond to low values of $\hat y$, and low values of $y$ correspond to high values of $\hat y$, the extreme of which is $\hat y = -y$. This puts you in a position where $cor(y, \hat y)<0$, and when you square that value, you miss that the predictions are terrible. Further, this approach misses systematic bias (consistently predicting too high or too low by $k$) and predicting multiples of the true values. That is, if $\hat y = a+b y$ for $(a,b)\ne(0,1)$, $cor(y, \hat y)$ is oblivious to the poor predictions. That this metric misses such critical information is, to me, a dealbreaker.
This one is appealing, but the "proportion of variance explained" interpretation of $R^2$ breaks down in most situations. Out-of-sample is one such situation, as the coefficients that result in the orthogonality needed for this interpretation to hold out-of-sample are unlikely to be the coefficients estimated in-sample, even for an OLS linear regression.
Finally, this one makes sense. We have a model and are interested in the square loss. As minimizing square loss corresponds to finding the conditional mean, a good benchmark is to see if the predictions are better than a model that always predicts the conditional mean to be the pooled mean.
$$
\dfrac{
\sum_{i=1}^n\big(
y_i - \hat y_i
\big)^2
}{
\sum_{i=1}^n\big(
y_i - \bar y
\big)^2
}
$$
If the numerator is smaller than the denominator, it means that our predictions beat the predictions made by our baseline model that naïvely predicts $\bar y$ every time. If the numerator is larger than the denominator, then all of our statistics and machine learning efforts are doing worse than we would do by predicting AVERAGE(A:A) (to use some Excel terminology). That is, our model is doing a poor job of predicting.
It is typical to subtract this quantity from $1$ in order to align with the other three ways of defining in-sample $R^2$.
$$R^2 = 1-
\dfrac{
\sum_{i=1}^n\big(
y_i - \hat y_i
\big)^2
}{
\sum_{i=1}^n\big(
y_i - \bar y
\big)^2
}
$$
This idea of comparing to a baseline model exists for other metrics. For logistic regressions, there are two named metrics that do exactly this: Efron's and McFadden's pseudo $R^2$, as discussed on this UCLA page.
For evaluating an out-of-sample $R^2$, I would use the following:
$$R^2_{oos}=1-
\dfrac{
\sum_{i=1}^n\big(
y_i - \hat y_i
\big)^2
}{
\sum_{i=1}^n\big(
y_i - \bar y_{train}
\big)^2
}
$$
This compares the out-of-sample performance of your model to the out-of-sample performance you would get from a model trained just to predict the mean every time (the naïve baseline model).
Irritatingly, the popular Python machine learning package sklearn has an out-of-sample $R^2$ function, sklearn.metrics.r2_score, that uses the $\bar y$ from whatever you input as the truth values. This is fine for in-sample $R^2$, but for out-of-sample $R^2$, it results in the following formula:
$$R^2_{oos}=1-
\dfrac{
\sum_{i=1}^n\big(
y_i - \hat y_i
\big)^2
}{
\sum_{i=1}^n\big(
y_i - \bar y_{test}
\big)^2
}
$$
The denominator is now based on the square loss of an intercept-only linear model that has been trained on the test data. We should never have access to this model, as it requires us to train on the test data, and I disagree with the sklearn implementation. Fortunately, however, this function does not fall for the traps that the squared correlation $cor(y, \hat y)^2$ does.
import numpy as np
import matplotlib.pyplot as plt
from sklearn.metrics import r2_score

np.random.seed(2022)
N = 100
y = np.random.uniform(0, 1, N)
yhat1 = y + 4
plt.scatter(y, yhat1, label = "Observed Predictions")
plt.plot([0, 1], [0, 1], label = "Perfect Predictions")
plt.legend()
plt.show()
plt.close()
print("sklearn R^2: ", r2_score(y, yhat1))
print("Squared correlation between observations and predictions: ", np.corrcoef(y, yhat1)[0, 1]**2)
sklearn R^2: -157.49362110734847
Squared correlation between observations and predictions: 1.0
yhat2 = y * 3
plt.scatter(y, yhat2, label = "Observed Predictions")
plt.plot([0, 1], [0, 1], label = "Perfect Predictions")
plt.legend()
plt.show()
plt.close()
print("sklearn R^2: ", r2_score(y, yhat2))
print("Squared correlation between observations and predictions: ", np.corrcoef(y, yhat2)[0, 1]**2)
sklearn R^2: -12.341142004503533
Squared correlation between observations and predictions: 1.0
yhat3 = -y
plt.scatter(y, yhat3, label = "Observed Predictions")
plt.plot([0, 1], [0, 1], label = "Perfect Predictions")
plt.legend()
plt.show()
plt.close()
print("sklearn R^2: ", r2_score(y, yhat3))
print("Squared correlation between observations and predictions: ", np.corrcoef(y, yhat3)[0, 1]**2)
sklearn R^2: -12.341142004503533
Squared correlation between observations and predictions: 1.0
yhat4 = 2 + 3*y
plt.scatter(y, yhat4, label = "Observed Predictions")
plt.plot([0, 1], [0, 1], label = "Perfect Predictions")
plt.legend()
plt.show()
plt.close()
print("sklearn R^2: ", r2_score(y, yhat4))
print("Squared correlation between observations and predictions: ", np.corrcoef(y, yhat4)[0, 1]**2)
sklearn R^2: -90.44196171593748
Squared correlation between observations and predictions: 1.0
In all of these situations, the squared correlation between the predictions and the truth is a perfect-looking $1$, yet the sklearn.metrics.r2_score correctly indicates that the predictions are terrible, as is evident in all four plots.
Finally, for evaluating $R^2$ on the combined data (training and testing), it is unclear what this would tell you. If I had to do that calculation, I would be inclined to use the in-sample $\bar y$ in the denominator and just do the sums over all $n+p$ values. You might also be interested in bootstrap validation that uses all of the observations.
|
40,539
|
How can a cross-level interaction be random in a 2-level mixed-model?
|
It's important to remember that an interaction is the product of the two variables.
For example if we have
sch.id ses sector
1 1 1
1 2 1
2 3 2
2 4 2
..then the interaction for ses:sector will be:
sch.id ses sector ses:sector
1 1 1 1
1 2 1 2
2 3 2 6
2 4 2 8
So, although sector does not vary within sch.id, the interaction that includes it does, because the other variable, ses, does vary within school.
As such, there is no technical reason why random slopes for such an interaction cannot be included. However, it would be odd to fit the first model in the OP, that is, a model with random slopes only for the interaction, since it is impossible to vary an interaction independently of both main effects. A more appropriate model (provided it is supported by the data) is:
math ~ ses*sector + (ses + ses:sector | sch.id)
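The table above can be checked numerically; a small NumPy sketch (illustrative only):

```python
import numpy as np

sch_id = np.array([1, 1, 2, 2])
ses    = np.array([1, 2, 3, 4])
sector = np.array([1, 1, 2, 2])

interaction = ses * sector        # the ses:sector column
print(interaction)                # [1 2 6 8]

# sector is constant within each school, but the interaction is not:
for s in np.unique(sch_id):
    assert len(np.unique(sector[sch_id == s])) == 1
    assert len(np.unique(interaction[sch_id == s])) > 1
```

Because the product varies within sch.id, a random slope for it is estimable even though a random slope for sector alone would not be.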
|
40,540
|
Why functions sampled from a linear kernel Gaussian Process are guaranteed to be a linear function?
|
Observe that your covariance is degenerate, as it has rank 1. Sampling from this distribution does not give you as much "randomness" as expected, as it can be written as the push-forward of a random variable on a lower-dimensional space. Let's make this explicit!
Notice that we can write your covariance matrix as
$$\boldsymbol{\Sigma} = \begin{pmatrix}0 & 0 & 0 \\
0 & 25 & 50 \\
0 & 50 & 100 \end{pmatrix} = \begin{pmatrix} 0 \\ 5 \\ 10 \end{pmatrix}\begin{pmatrix}0 & 5 & 10 \end{pmatrix} = \boldsymbol{v}\boldsymbol{v}^{T}$$
which reveals the rank 1 structure of $\boldsymbol{\Sigma}=\boldsymbol{v}\boldsymbol{v}^{T}$. Notice that this structure directly comes from the linearity of the kernel! Denote the samples as $\boldsymbol{y} \sim \mathcal{N}\left(\boldsymbol{0}, \boldsymbol{\Sigma}\right) \in \mathbb{R}^{3}$. The trick is now rewriting this as
$$\boldsymbol{y} \stackrel{(d)}{=} z\boldsymbol{v}$$
where $z \sim \mathcal{N}(0,1)$ (one dimensional!). This is due to the fact that scaling $z$ with $v_i$ in each component is still Gaussian and we can check the statistics
$$\mathbb{E}[z\boldsymbol{v}] = 0 \hspace{3mm} \text{ and } \hspace{3mm} \text{cov}(z\boldsymbol{v}) = \boldsymbol{v} \text{ cov}(z)\boldsymbol{v}^{T} = \boldsymbol{v}\boldsymbol{v}^{T} = \boldsymbol{\Sigma}$$
which shows $\boldsymbol{y} \stackrel{(d)}{=} z\boldsymbol{v}$. But this means we can also sample $\boldsymbol{y}$ as $z\boldsymbol{v}$ which always is a random multiple of $\boldsymbol{v}$ (again, $z$ is one dimensional) and hence reveals that the three points will always lie on the line given by $\boldsymbol{v}$!
Essentially, having a low-rank covariance implies that we will only access low-dimensional randomness, although the random variable itself seems to live in a higher-dimensional space. This is why all your points end up on a line!
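A short NumPy sketch (illustrative) confirming that samples built as $z\boldsymbol{v}$ are collinear while still matching the covariance:

```python
import numpy as np

rng = np.random.default_rng(0)
v = np.array([0.0, 5.0, 10.0])
Sigma = np.outer(v, v)            # the rank-1 covariance v v^T

z = rng.standard_normal(1000)     # one-dimensional randomness
Y = z[:, None] * v                # each sample is a random multiple of v

# Every sampled triple lies on the line spanned by v ...
assert np.linalg.matrix_rank(Y) == 1
# ... and the empirical covariance approaches Sigma.
print(np.cov(Y, rowvar=False).round(0))
```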
Another, perhaps helpful, approach looks at the diagonalization of your covariance:
$$\boldsymbol{\Sigma} = \boldsymbol{U}\boldsymbol{\Lambda}\boldsymbol{U}^{T}$$
Due to rank 1, $\Lambda_{11} \not = 0$ while $\Lambda_{22}=\Lambda_{33}=0$. Since Gaussians are invariant under rotations, we can essentially use this new rotated coordinate system where your covariance $\boldsymbol{\Sigma}$ becomes the diagonal covariance $\boldsymbol{\Lambda}$. But here you see that the second and third entry of your random vector are fixed since they have no variance ($\Lambda_{22}=\Lambda_{33}=0$)!
You can check out this answer for a more general explanation of this effect.
|
40,541
|
Why functions sampled from a linear kernel Gaussian Process are guaranteed to be a linear function?
|
Refer to the "Drawing values from the distribution" section of the Wikipedia article on the multivariate normal distribution. Let me use that to prove the linearity of the sampled functions with basic linear algebra.
We will use the following fact,
If we can write the covariance matrix $\Sigma$ of a multivariate normal
distribution as $\Sigma=AA^T$ for any real $A$, then
$\mathbf{y} = \boldsymbol{\mu} + A \mathbf{z}$ is a valid function drawn from
$\mathcal{N}(\boldsymbol{\mu},\Sigma)$, where $\mathbf{z}$ is a
function drawn from $\mathcal{N}(\mathbf{o},I)$ (standard multivariate normal
distribution).
We want to show that any $\mathbf{y}$ follows $m\mathbf{x}+b$ (linear) form, where $m$ and $b$ are slope and offset respectively.
According to the distill article you refer to in the question (also in general), the linear kernel is given as follows,
$$
K(x,x') = \sigma^2(x-c)(x'-c) + \sigma_b^2
$$
Writing it in covariance matrix form,
$$
\Sigma = K(\mathbf{x},\mathbf{x}) =
\begin{bmatrix}
\sigma^2(x_1-c)^2+\sigma_b^2 & \sigma^2(x_1-c)(x_2-c)+\sigma_b^2 & \cdots\\
\sigma^2(x_2-c)(x_1-c)+\sigma_b^2 & \sigma^2(x_2-c)^2+\sigma_b^2 & \cdots\\
\cdots& \cdots & \cdots
\end{bmatrix}
$$
If we want to write it in $\Sigma = AA^T$ form, $A$ would be,
$$
A=
\begin{bmatrix}
\sigma(x_1-c) & \sigma_b & 0 & \cdots\\
\sigma(x_2-c) & \sigma_b & 0 & \cdots\\
\cdots& \cdots & \cdots & \cdots\\
\sigma(x_n-c)& \sigma_b & 0 & \cdots
\end{bmatrix}
$$
Now, for GP, we take $\boldsymbol{\mu}=\mathbf{o}$, so, $\mathbf{y}=A\mathbf{z}$ is a valid function from $\mathcal{N}(\mathbf{o},\Sigma)$.
$$
\begin{bmatrix}
y_1\\y_2\\\cdots\\y_n
\end{bmatrix}
=A\mathbf{z}=
\begin{bmatrix}
\sigma(x_1-c) & \sigma_b & 0 & \cdots\\
\sigma(x_2-c) & \sigma_b & 0 & \cdots\\
\cdots& \cdots & \cdots & \cdots\\
\sigma(x_n-c)& \sigma_b & 0 & \cdots
\end{bmatrix}
\begin{bmatrix}
z_1\\z_2\\\cdots\\z_n
\end{bmatrix}=z_1\sigma
\begin{bmatrix}
x_1\\x_2\\\cdots\\x_n
\end{bmatrix}-(z_1\sigma c - z_2\sigma_b)
$$
As you can see, $\mathbf{y}$ follows $m\mathbf{x}+b$ form, where slope $m=z_1\sigma$ and offset $b=z_2\sigma_b - z_1\sigma c$.
Hence proved :)
(let me know in comments if any step requires more clarification).
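The derivation above can be checked numerically (a sketch; the hyperparameter values are arbitrary, and $A$ is written with only its two nonzero columns, since the padded zero columns change nothing in $AA^T$):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical hyperparameter values for illustration
sigma, sigma_b, c = 1.5, 0.5, 0.3
x = np.linspace(-2, 2, 50)

# Linear kernel: K(x, x') = sigma^2 (x - c)(x' - c) + sigma_b^2
Sigma = sigma**2 * np.outer(x - c, x - c) + sigma_b**2

# The factor A from the answer, keeping only its two nonzero columns
A = np.column_stack([sigma * (x - c), np.full_like(x, sigma_b)])
assert np.allclose(Sigma, A @ A.T)

# A sample y = A z with z ~ N(0, I) is exactly linear in x
z = rng.standard_normal(2)
y = A @ z
slope = z[0] * sigma
offset = z[1] * sigma_b - z[0] * sigma * c
assert np.allclose(y, slope * x + offset)
print("slope:", slope, "offset:", offset)
```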
|
40,542
|
Why is ReLU so popular despite being NOT zero-centered
|
ReLU's non-zero centering is a genuine issue. ReLUs are popular because they are simple and fast. On the other hand, if the only problem you're finding with ReLU is that optimization is slow, training the network longer is a reasonable solution.
However, it's more common for state-of-the-art papers to use more complex activations. A general strategy is to come up with a function that retains approximately the identity function for positive values, but also controls the means and variances. For example, the mish activation has achieved state-of-the-art results recently.
But maybe you face time or cost constraints, or other problems with ReLU (e.g. dead units). In these cases, you may be interested in one of these alternative activations.
Leaky ReLUs have a negative portion, and this may reduce the occurrence of the detrimental "zig zag" effect noted in the linked thread. The paragraph quoted below is not at all conclusive, but I haven't found a better paper. (If you've found a better paper about the optimization dynamics of Leaky ReLU units, please share it in comments!)
Andrew L. Maas, Awni Y. Hannun, Andrew Y. Ng "Rectifier Nonlinearities Improve Neural Network Acoustic Models"
The choice of rectifier function used in the DNN appears unimportant for both frame-wise and WER metrics. Both the leaky and standard ReL networks perform similarly, suggesting the leaky rectifiers’ non-zero gradient does not substantially impact training optimization. During training we observed leaky rectifier DNNs converge slightly faster, which is perhaps due to the difference in gradient among the two rectifiers.
Djork-Arné Clevert, Thomas Unterthiner, Sepp Hochreiter. "Fast and Accurate Deep Network Learning by Exponential Linear Units (ELUs)"
We introduce the "exponential linear unit" (ELU) which speeds up learning in deep neural networks and leads to higher classification accuracies. Like rectified linear units (ReLUs), leaky ReLUs (LReLUs) and parametrized ReLUs (PReLUs), ELUs alleviate the vanishing gradient problem via the identity for positive values. However, ELUs have improved learning characteristics compared to the units with other activation functions. In contrast to ReLUs, ELUs have negative values which allows them to push mean unit activations closer to zero like batch normalization but with lower computational complexity. Mean shifts toward zero speed up learning by bringing the normal gradient closer to the unit natural gradient because of a reduced bias shift effect. While LReLUs and PReLUs have negative values, too, they do not ensure a noise-robust deactivation state. ELUs saturate to a negative value with smaller inputs and thereby decrease the forward propagated variation and information. Therefore, ELUs code the degree of presence of particular phenomena in the input, while they do not quantitatively model the degree of their absence. In experiments, ELUs lead not only to faster learning, but also to significantly better generalization performance than ReLUs and LReLUs on networks with more than 5 layers. On CIFAR-100 ELUs networks significantly outperform ReLU networks with batch normalization while batch normalization does not improve ELU networks. ELU networks are among the top 10 reported CIFAR-10 results and yield the best published result on CIFAR-100, without resorting to multi-view evaluation or model averaging. On ImageNet, ELU networks considerably speed up learning compared to a ReLU network with the same architecture, obtaining less than 10% classification error for a single crop, single model network.
Günter Klambauer, Thomas Unterthiner, Andreas Mayr, Sepp Hochreiter "Self-Normalizing Neural Networks"
Deep Learning has revolutionized vision via convolutional neural networks (CNNs) and natural language processing via recurrent neural networks (RNNs). However, success stories of Deep Learning with standard feed-forward neural networks (FNNs) are rare. FNNs that perform well are typically shallow and, therefore cannot exploit many levels of abstract representations. We introduce self-normalizing neural networks (SNNs) to enable high-level abstract representations. While batch normalization requires explicit normalization, neuron activations of SNNs automatically converge towards zero mean and unit variance. The activation function of SNNs are "scaled exponential linear units" (SELUs), which induce self-normalizing properties. Using the Banach fixed-point theorem, we prove that activations close to zero mean and unit variance that are propagated through many network layers will converge towards zero mean and unit variance -- even under the presence of noise and perturbations. This convergence property of SNNs allows to (1) train deep networks with many layers, (2) employ strong regularization, and (3) to make learning highly robust. Furthermore, for activations not close to unit variance, we prove an upper and lower bound on the variance, thus, vanishing and exploding gradients are impossible. We compared SNNs on (a) 121 tasks from the UCI machine learning repository, on (b) drug discovery benchmarks, and on (c) astronomy tasks with standard FNNs and other machine learning methods such as random forests and support vector machines. SNNs significantly outperformed all competing FNN methods at 121 UCI tasks, outperformed all competing methods at the Tox21 dataset, and set a new record at an astronomy data set. The winning SNN architectures are often very deep. Implementations are available at: this http URL.
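The alternatives quoted above all keep (approximately) the identity for positive inputs and differ in their negative portion; a minimal NumPy sketch (the SELU constants are the fixed values from Klambauer et al.):

```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

def leaky_relu(x, alpha=0.01):
    # Small non-zero slope on the negative side
    return np.where(x > 0, x, alpha * x)

def elu(x, alpha=1.0):
    # Identity for positive inputs, saturates to -alpha for very negative inputs
    return np.where(x > 0, x, alpha * (np.exp(np.minimum(x, 0.0)) - 1.0))

def selu(x):
    # Fixed constants from Klambauer et al. (2017)
    alpha, scale = 1.6732632423543772, 1.0507009873554805
    return scale * elu(x, alpha)

x = np.array([-2.0, -0.5, 0.0, 0.5, 2.0])
for f in (relu, leaky_relu, elu, selu):
    print(f.__name__, f(x))
```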
|
40,543
|
Does using a random train-test split lead to data leakage?
|
Yes, random train-test splits can lead to data leakage, and if traditional k-fold and leave-one-out CV are the default procedures being followed, data leakage will happen. Leakage is the major reason why traditional CV is not appropriate for time series. Using these cross-validators when you shouldn't will inflate performance metrics since they allow the training model to cheat because observations "from the future" (posterior samples) leak into the training set.
Since time series data is time ordered, we want to keep intact the fact that we must use past observations to predict future observations. The randomization in the standard cross-validation algorithm does not preserve the time ordering, and we end up making predictions for some samples using a model trained on posterior samples. While this is not immediately a huge problem for some applications, it becomes a critical one when the time series data is strongly correlated along the time axis, as it often is. The randomization of traditional CV makes it likely that, for each sample in the validation set, numerous strongly correlated samples exist in the train set. This defeats the very purpose of having a validation set: the model essentially “knows” the validation set already, leading to inflated performance metrics on the validation set in case of overfitting.
One solution is to use walk-forward cross-validation (the closest package implementation being TimeSeriesSplit in sklearn), which restricts the full sample set differently for each split. This, however, suffers from the problem that, near the split point, we may have training samples whose evaluation time is posterior to the prediction time of validation samples. Such overlapping samples are unlikely to be independent, leading to information leaking from the train set into the validation set. To deal with this, purging can be used.
Purging
Purging involves dropping from the train set any sample whose evaluation time is posterior to the earliest prediction time in the validation set. This ensures that predictions on the validation set are free of look-ahead bias. But since walk-forward CV has other drawbacks, such as its lack of focus on the most recent data when constructing training folds, the following is becoming more widely adopted for time series data:
Combinatorial cross-validation
Suppose we abandon the requirement that all the samples in the train set precede the samples in the validation set. This is not as problematic as it may sound. The crucial point is to ensure that the samples in the validation set are reasonably independent from the samples in the training set. If this condition is verified, the validation set performance will still be a good proxy for the performance on new data.
Combinatorial K-fold cross-validation is similar to K-fold cross-validation, except that we take the validation set to consist of $j < K$ blocks of samples. We then have $K$ choose $j$ possible different splits. This allows us to easily create a large number of splits, just by taking $j = 2$ or $3$, addressing two other problems (not discussed here) that purging did not address.
It is however clear that we cannot use combinatorial K-fold cross-validation as it stands. We have to make sure that the samples in the train set and in the validation set are independent. We already saw that purging helps reduce their dependence. However, when there are train samples occurring after validation samples, this is not sufficient.
Embargoing
We obviously also need to prevent the overlap of train and validation samples at the right end(s) of the validation set. But simply dropping any train sample whose prediction time occurs before the latest evaluation time in the preceding block of validation samples may not be sufficient. There may be correlations between the samples over longer periods of time. In order to deal with such long range correlation, we can define an embargo period after each right end of the validation set. If a train sample prediction time falls into the embargo period, we simply drop the sample from the train set. The required embargo period has to be estimated from the problem and dataset at hand.
A nice feature of combinatorial cross-validation is also that as each block of samples appears the same number of times in the validation set, we can group them (arbitrarily) into validation predictions over the full dataset (keeping in mind that these predictions have been made by models trained on different train sets). This is very useful to extract performance statistics over the whole dataset.
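One purged-and-embargoed split of this kind can be sketched as follows (the function name and the purge/embargo lengths are illustrative placeholders; in practice the embargo period must be estimated from the data's correlation structure, as noted above):

```python
import numpy as np

def purged_embargo_split(n, val_start, val_end, purge, embargo):
    """One combinatorial-CV-style split: a validation block in the middle of
    the series, with the train set taken from both sides minus a purge
    window before the block and an embargo window after it."""
    idx = np.arange(n)
    val = idx[val_start:val_end]
    left = idx[: max(val_start - purge, 0)]     # train samples before the block, purged
    right = idx[min(val_end + embargo, n):]     # train samples after the block, embargoed
    train = np.concatenate([left, right])
    return train, val

train, val = purged_embargo_split(100, val_start=40, val_end=60, purge=5, embargo=10)
# train covers [0, 35) and [70, 100); the gaps around val are the purge/embargo buffers
```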
|
40,544
|
Defining extremeness of test statistic and defining $p$-value for a two-sided test
|
Defining extremeness of test statistic and defining p-value for a two-sided test...
I would suggest that an appropriate perspective here is that, when one has the "right" statistic, the statistic itself tells you what "extremeness" means for the test problem at hand---one-sided or two-sided. The more basic question is therefore what the "right" statistic is. Test problems are special cases of optimization problems---you want to maximize power subject to size constraint. So this means defining the "right" solution concept.
For example, finding the most powerful test for the test problem with a simple null vs. simple alternative is a special case of a linear program:
$$
\sup_{\substack{0 \leq \phi \leq 1 \\ \int \phi(\omega) f_0(\omega)\, d\mu \leq \alpha}} \int \phi(\omega) f_1(\omega)\, d\mu.
$$
It is a general fact that a solution $\phi^*$ for any such program takes the form
$$
\phi^* =
\begin{cases}
1 & \text{if } f_1 \geq k f_0 \\
0 & \text{if } f_1 < k f_0,
\end{cases}
$$
for some $k$. In the context of a test problem, a natural interpretation is then that one rejects when the likelihood ratio statistic $\frac{f_1}{f_0}$ is larger than $k$.
(It is suggested in the comments that the threshold $k$ is interpreted to be the "shadow price" of the size constraint. Apparently this terminology is borrowed from economics. $k$ is the Kuhn-Tucker-Lagrange multiplier of the problem. For interior solutions, typically one would say that if $\alpha$---the budget, in economic problems---is relaxed by $\epsilon$, the power of the test increases by $k \epsilon$. This interpretation, however, does not really hold for linear programs in general.)
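As a concrete instance of this form (a sketch; the particular null $N(0,1)$, alternative $N(1,1)$, and $\alpha = 0.05$ are made-up illustration values), the likelihood ratio is monotone in the data, so thresholding it is the same as thresholding $x$:

```python
from math import exp
from statistics import NormalDist

# Hypothetical instance: simple null N(0, 1) vs. simple alternative N(1, 1),
# with an arbitrary size constraint alpha = 0.05.
alpha = 0.05
std = NormalDist()

# The likelihood ratio f1(x)/f0(x) = exp(x - 1/2) is increasing in x, so
# rejecting when the LR exceeds k is the same as rejecting when x is large;
# the size constraint pins down the cutoff.
cutoff = std.inv_cdf(1 - alpha)      # reject when x > cutoff (about 1.645)
k = exp(cutoff - 0.5)                # the equivalent threshold on the LR itself
power = 1 - std.cdf(cutoff - 1.0)    # rejection probability under the alternative
print(cutoff, k, power)
```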
Similarly, finding a most powerful test of composite null vs. simple alternative amounts to solving a linear program. The solution to the corresponding dual program tells us that the most powerful statistic is a likelihood ratio statistic with respect to the least favorable Bayesian prior on the null. (The simple null case is a special case, with trivial prior.)
Tests with one-sided alternatives for models with the monotone likelihood ratio (MLR) property are of course another example. MLR means the model admits a ranking of likelihood ratios that is invariant with respect to the data $\omega$. So the likelihood ratio test is a most powerful test, almost by assumption.
For two-sided alternatives, e.g. $\Gamma_0 = \{\gamma_0\}$ and $\Gamma_1 = (-\infty,\gamma_0)\cup (\gamma_0, \infty)$ for normal densities parametrized by mean $\gamma \in \mathbb{R}$, the most powerful test does not exist in general. Therefore the right statistic needs to be determined by some other criterion---e.g. one can instead look for a locally most powerful test.
A test $\phi^*$ is a locally most powerful test if for any other test $\phi$, there exists an open neighborhood $N_{\gamma_0, \phi}$ of the null hypothesis such that $\phi^*$ has uniformly higher power than $\phi$ on $N_{\gamma_0, \phi}$. The corresponding first-order optimality condition gives the criterion
$$
\phi^* =
\begin{cases}
1 & \text{if } \frac{\partial^2}{\partial \gamma^2}f_{\gamma_0} \geq k_1 \frac{\partial}{\partial \gamma} f_{\gamma_0} + k_2 f_{\gamma_0} \\
0 & \text{if } \frac{\partial^2}{\partial \gamma^2}f_{\gamma_0} < k_1 \frac{\partial}{\partial \gamma} f_{\gamma_0} + k_2 f_{\gamma_0}
\end{cases}
$$
for some $k_1$ and $k_2$. Substituting the normal density into above expressions, we have that $\phi^*$ rejects when $|x- \gamma_0|$ is large---a two-sided test.
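For completeness, the substitution is quick to carry out (a sketch, taking unit variance):
$$
\frac{\partial}{\partial \gamma} f_{\gamma}\Big|_{\gamma_0} = (x-\gamma_0)\, f_{\gamma_0}, \qquad
\frac{\partial^2}{\partial \gamma^2} f_{\gamma}\Big|_{\gamma_0} = \left((x-\gamma_0)^2 - 1\right) f_{\gamma_0},
$$
so, dividing the rejection criterion by $f_{\gamma_0} > 0$, $\phi^*$ rejects when $(x-\gamma_0)^2 - k_1(x-\gamma_0) \geq 1 + k_2$. Imposing the natural symmetry around $\gamma_0$ forces $k_1 = 0$, so rejection occurs exactly when $|x-\gamma_0|$ exceeds a threshold.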
|
Defining extremeness of test statistic and defining $p$-value for a two-sided test
|
Defining extremeness of test statistic and defining p-value for a
two-sided test...
I would suggest that an appropriate perspective here is that, when one has the "right" statistic, the statistic its
|
Defining extremeness of test statistic and defining $p$-value for a two-sided test
Defining extremeness of test statistic and defining p-value for a
two-sided test...
I would suggest that an appropriate perspective here is that, when one has the "right" statistic, the statistic itself tells you what "extremeness" means for the test problem at hand---one-sided or two-sided. The more basic question is therefore what the "right" statistic is. Test problems are special cases of optimization problems---you want to maximize power subject to size constraint. So this means defining the "right" solution concept.
For example, finding the most powerful test for the test problem with a simple null vs. simple alternative is a special case of a linear program:
$$
\sup_{0 \leq \phi \leq 1, \, \\ \\ \int \phi(\omega) f_0(\omega) d\mu \leq \alpha} \int \phi(\omega) f_1(\omega) d\mu.
$$
It is a general fact that a solution $\phi^*$for any such program takes the form
$$
\phi^* =
\begin{cases}
1 & \text{if } f_1 \geq k f_0 \\
0 & \text{if } f_1 \geq k f_0,
\end{cases}
$$
for some $k$. In the context of a test problem, a natural interpretation is then that one rejects when the likelihood ratio statistic $\frac{f_1}{f_0}$ is larger than $k$.
(It is suggested in the comments that the threshold $k$ is interpreted to be the "shadow price" of the size constraint. Apparently this terminology is borrowed from economics. $k$ is the Kuhn-Tucker-Lagrange multiplier of the problem. For interior solutions, typically one would say that if $\alpha$---the budget, in economic problems---is relaxed by $\epsilon$, the power of the test increases by $k \epsilon$. This interpretation, however, does not really hold for linear programs in general.)
Similarly, finding a most powerful test of composite null vs. simple alternative amounts to solving a linear program. The solution to the corresponding dual program tells us that the most powerful statistic is a likelihood ratio statistic with respect to the least favorable Bayesian prior on the null. (The simple null case is a special case, with trivial prior.)
Tests with one-sided alternatives for models with the monotone likelihood ratio (MLR) property are of course another example. MLR means the model admits a ranking of likelihood ratios that's invariant with respect to the data $\omega$. So the likelihood ratio test is a most powerful test, almost by assumption.
For two-sided alternatives, e.g. $\Gamma_0 = \{\gamma_0\}$ and $\Gamma_1 = (-\infty,\gamma_0)\cup (\gamma_0, \infty)$ for normal densities parametrized by mean $\gamma \in \mathbb{R}$, the most powerful test does not exist in general. Therefore the right statistic needs to be determined by some other criterion---e.g. one can instead look for a locally most powerful test.
A test $\phi^*$ is a locally most powerful test if for any other test $\phi$, there exists an open neighborhood $N_{\gamma_0, \phi}$ of the null hypothesis such that $\phi^*$ has uniformly higher power than $\phi$ on $N_{\gamma_0, \phi}$. The corresponding first-order optimality condition gives the criterion
$$
\phi^* =
\begin{cases}
1 & \text{if } \frac{\partial^2}{\partial \gamma^2}f_{\gamma_0} \geq k_1 \frac{\partial}{\partial \gamma} f_{\gamma_0} + k_2 f_{\gamma_0} \\
0 & \text{if } \frac{\partial^2}{\partial \gamma^2}f_{\gamma_0} < k_1 \frac{\partial}{\partial \gamma} f_{\gamma_0} + k_2 f_{\gamma_0}
\end{cases}
$$
for some $k_1$ and $k_2$. Substituting the normal density into the above expressions, we have that $\phi^*$ rejects when $|x - \gamma_0|$ is large---a two-sided test.
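That substitution is easy to verify numerically (an illustrative sketch; the evaluation points and step size are arbitrary): for the $N(\gamma, 1)$ density, $\frac{\partial}{\partial \gamma} f_{\gamma_0} = (x - \gamma_0) f_{\gamma_0}$ and $\frac{\partial^2}{\partial \gamma^2} f_{\gamma_0} = ((x - \gamma_0)^2 - 1) f_{\gamma_0}$, so with $k_1 = 0$ (forced by symmetry) the criterion reduces to $(x - \gamma_0)^2 \geq 1 + k_2$, i.e. rejecting when $|x - \gamma_0|$ is large:

```python
import math

def f(x, g):
    """N(g, 1) density, viewed as a function of the mean g."""
    return math.exp(-0.5 * (x - g) ** 2) / math.sqrt(2.0 * math.pi)

g0, h = 0.0, 1e-4
checks = []
for x in [-2.5, -0.3, 0.0, 1.7]:
    d1 = (f(x, g0 + h) - f(x, g0 - h)) / (2 * h)                  # numerical df/dgamma
    d2 = (f(x, g0 + h) - 2 * f(x, g0) + f(x, g0 - h)) / h ** 2    # numerical d2f/dgamma2
    d1_closed = (x - g0) * f(x, g0)                    # (x - g0) f
    d2_closed = ((x - g0) ** 2 - 1.0) * f(x, g0)       # ((x - g0)^2 - 1) f
    checks.append((abs(d1 - d1_closed), abs(d2 - d2_closed)))

print(max(e1 for e1, _ in checks), max(e2 for _, e2 in checks))
```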
|
40,545
|
Defining extremeness of test statistic and defining $p$-value for a two-sided test
|
In addition to scenarios in two-sided tests, this question arises in a less avoidable way in group sequential clinical trials.
In a group sequential trial there is a set of analysis times, and a stopping boundary specifying thresholds at each analysis for the trial to stop. In calculating $p$-values or confidence intervals it is necessary to specify an ordering of the possible outcomes. For example, if you stop at time 2 out of 4 with a $Z$-score of 3, how does that compare to stopping at time 3 with a $Z$-score of 2.5?
Among the orderings actually proposed are
ordering by the magnitude of difference
ordering by time, so that any stopping at an earlier time is more extreme than any stopping at a later time
These are genuine choices; different people could legitimately pick different orderings. Ordering by the magnitude of difference tends to lead to narrower confidence intervals, more accurate p-values, and less bias, but it increases the sensitivity of the analysis to the (unobservable) times at which future analyses of a stopped trial would have occurred.
(Reference: short course by Kittleson and Gillen)
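The stagewise (time) ordering can be sketched with a small Monte Carlo (a toy illustration only: the number of looks, the boundaries, and the observed outcomes below are all made up, and a real analysis would use the exact group sequential distribution rather than simulation):

```python
import math
import random

random.seed(1)
K = 4                          # number of planned analyses
crit = [3.0, 2.7, 2.4, 2.1]    # made-up efficacy boundaries (illustrative only)

def run_trial():
    """One null trial: return (stopping stage, Z at that stage), 1-based."""
    s = 0.0
    for k in range(1, K + 1):
        s += random.gauss(0.0, 1.0)   # i.i.d. information increments under H0
        z = s / math.sqrt(k)          # cumulative Z-score at analysis k
        if abs(z) >= crit[k - 1] or k == K:
            return k, z

trials = [run_trial() for _ in range(200_000)]

def p_stagewise(stage_obs, z_obs):
    """Stagewise (time) ordering: any earlier stop is more extreme than the
    observed outcome; at the same stage, a larger |Z| is more extreme."""
    hits = sum(1 for k, z in trials
               if k < stage_obs or (k == stage_obs and abs(z) >= abs(z_obs)))
    return hits / len(trials)

p_early = p_stagewise(1, 3.5)   # stopped at the first look with Z = 3.5
p_late = p_stagewise(3, 2.5)    # stopped at the third look with Z = 2.5
print(p_early, p_late)
```

Under this ordering, stopping at the first look is automatically at least as extreme as any later stop, which is exactly the choice being made by ordering on time.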
|
40,546
|
Defining extremeness of test statistic and defining $p$-value for a two-sided test
|
The answer to this question is what defines the particular test
But how do we define what is more extreme?
This choice is really the essence of what defines the particular hypothesis test under use. Indeed, a classical hypothesis test can be reduced to a specification of a total order $\preceq$ on the set of possible outcomes for the observable data. This total order, which I will call an evidential ordering, defines an ordering of which observable outcomes are more conducive to the null hypothesis and which are more conducive to the alternative hypothesis (i.e., "more extreme").
Suppose we have an observable data vector $\mathbf{x} \in \mathscr{X}$ from a model $f_\theta$ and we define hypotheses $H_0: \theta \in \Theta_0$ and $H_A: \theta \in \Theta_A$. Now suppose we choose an evidential ordering $\preceq$ on the set $\mathscr{X}$, where larger values in the ordering are regarded as being more conducive to the alternative hypothesis. Then we can define the p-value function for the corresponding hypothesis test as:
$$p(\mathbf{x}) = \sup_{\theta \in \Theta_0} \mathbb{P}( \mathbf{x} \in \mathcal{H}_A(\mathbf{x}) | \theta)
\quad \quad \quad
\mathcal{H}_A(\mathbf{x}) \equiv\{ \mathbf{x}' \in \mathscr{X} | \mathbf{x} \preceq \mathbf{x}' \}.$$
Since the hypothesis test is fully defined by its p-value function, and since this function is fully determined by the evidential ordering, the evidential ordering fully defines the test. Two hypothesis tests are equivalent if they use the same total order $\preceq$ (e.g., the T-test and F-test in a linear regression are equivalent if there is only one explanatory variable). In practice, the evidential ordering is usually defined only implicitly through the formation of a test statistic $T:\mathscr{X} \rightarrow \mathbb{R}$ and a specification of an ordering on $\mathbb{R}$. Nevertheless, the test statistic is really just a mechanism to define the underlying evidential ordering, and two different specifications of test statistics that lead to the same evidential ordering essentially define the same test. (You can find more discussion of the mathematical structure of a hypothesis test in this related answer.)
Now, we can make different choices of the evidential ordering and this defines different hypothesis tests. We can then explore the properties of those tests ---e.g., their power function, etc.--- to see which orderings lead to tests with good properties. Finding a good ordering that yields good properties for the test is an art form in itself, but the general idea is that we usually try to form a statistic that tends to be "small" when the null hypothesis is true, and gets larger the further we depart into the alternative hypothesis. The likelihood-ratio statistic you use in your question is a statistic that has this property, but there are others as well. As to how to generalise the likelihood ratio statistic to composite hypotheses, the usual generalisation is:
$$R(\mathbf{x}) = \frac{\sup_{\theta \in \Theta_A} f_\theta(\mathbf{x})}{\sup_{\theta \in \Theta_0} f_\theta(\mathbf{x})}.$$
As you can see, this statistic defines "extremeness" by looking at the constrained maximised likelihood within each composite hypothesis. Other generalisations are of course possible, and it is open to you to formulate an alternative test statistic leading to a different test (through a different evidentiary ordering).
Trying to clear up your confusion: From what you have written in your question, I think the issue here is a failure to understand why we use "two-sided" hypothesis tests (or statistics like the LR statistic) that count unlikely deviations in any direction as being conducive to the alternative hypothesis. The reason for this is that we are usually a priori ignorant of the likely direction in which extreme deviations might occur. If we first observe the data and then see which tail it is in, and then choose a "one-sided" alternative in that direction, we are essentially altering the evidential ordering after seeing the data. This induces serious confirmatory bias in the test, since the alternative hypothesis is formulated after seeing the data in a way that treats deviations in the observed direction to be conducive to the alternative.
For example, if we have a symmetric distribution and we use a post hoc one-sided test, we essentially halve the p-value (it is now uniformly distributed between zero and a half). This imposes serious confirmatory bias, and it means that the stipulated size of the test is actually only half the true size.
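The halving is easy to see by simulation (a minimal sketch, assuming a standard normal test statistic under a true point null): the post hoc "one-sided" p-value is uniform on $(0, \tfrac{1}{2})$, so a nominal 5% test actually rejects about 10% of the time.

```python
import math
import random

random.seed(7)

def Phi(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

# Test statistics under a true point null: z ~ N(0, 1).
zs = [random.gauss(0.0, 1.0) for _ in range(100_000)]

# Honest two-sided p-value, and the post hoc "one-sided" p-value obtained by
# choosing the alternative's direction after seeing which tail z landed in.
p_two = [2.0 * (1.0 - Phi(abs(z))) for z in zs]
p_post = [1.0 - Phi(abs(z)) for z in zs]   # always exactly half of p_two

rate_post = sum(p < 0.05 for p in p_post) / len(p_post)
print(max(p_post), sum(p_post) / len(p_post), rate_post)
```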
|
40,547
|
Defining extremeness of test statistic and defining $p$-value for a two-sided test
|
Not taking it further than likelihood ratios ...
$\renewcommand\vec{\boldsymbol}$
The generalized likelihood ratio, for a test of the null hypothesis $\vec\theta \in \Theta_0$ vs the alternative $\vec\theta \in \Theta_\mathrm{A}$ is
$$
\renewcommand\vec{\boldsymbol}
\frac{\sup_{\vec{\theta} \in \Theta_\mathrm{A}} \left[f(\vec{x}; \vec\theta)\right]}{\sup_{\vec{\theta} \in \Theta_0} \left[f(\vec{x}; \vec\theta)\right]}
$$
& for simple, i.e. fully specified, null ($\vec\theta = \vec\theta_0$) & alternative ($\vec\theta=\vec\theta_\mathrm{A}$) reduces to the ordinary likelihood ratio. On the face of it, it's a reasonable measure of extremeness: the ratio of the likelihood of the most likely parameter values consistent with the alternative to the likelihood of the most likely parameter values consistent with the null.
A typical two-tailed test for a scalar $\theta$ of the null $\theta=\theta_0$ vs the unrestricted alternative $\theta \in \Theta$ would use
$$
\frac{f(\vec{x}; \hat\theta)}{f(\vec{x}; \theta_0)}
$$
where $\hat\theta$ is the maximum-likelihood estimate; with the test statistic increasing as $\hat\theta$ moves away from $\theta_0$ in either direction. For the mean of a Gaussian distribution, with known variance, that's
$$\frac{f(\vec{x}; \bar x, \sigma^2)}{f(\vec{x}; \mu_0, \sigma^2)}$$
(where $f(\cdot)$ is the Gaussian density function). When the variance is unknown, i.e. a nuisance parameter, the generalized likelihood ratio becomes
$$\frac{f\left(\vec{x}; \bar{x}, \frac{\sum(x-\bar{x})^2}{n}\right)}{f\left(\vec{x}; \mu_0, \frac{\sum(x-\mu_0)^2}{n}\right)}$$
These equate to the z- & t-statistics—to @Glen_b's point, any sensible test statistics for these cases will.
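The unknown-variance case can be checked numerically (the sample values and $n$ below are arbitrary): the generalized likelihood ratio above equals $\left(1 + \frac{t^2}{n-1}\right)^{n/2}$ with $t$ the usual one-sample t-statistic, a monotone function of $t^2$, so ordering by it is the same as ordering by $|t|$.

```python
import math
import random

random.seed(0)
n, mu0 = 12, 0.0                               # arbitrary illustrative values
x = [random.gauss(0.7, 2.0) for _ in range(n)]

xbar = sum(x) / n
ss_xbar = sum((xi - xbar) ** 2 for xi in x)    # sum of squares about the mean
ss_mu0 = sum((xi - mu0) ** 2 for xi in x)      # sum of squares about mu0

def loglik(mu, sigma2):
    """Gaussian log-likelihood of the sample at (mu, sigma2)."""
    return sum(-0.5 * math.log(2.0 * math.pi * sigma2)
               - (xi - mu) ** 2 / (2.0 * sigma2) for xi in x)

# Log generalized likelihood ratio, computed directly from the two profiled
# likelihoods (MLE variance plugged in under each hypothesis) ...
log_glr = loglik(xbar, ss_xbar / n) - loglik(mu0, ss_mu0 / n)

# ... and via the closed form (n/2) log(1 + t^2/(n-1)).
t = (xbar - mu0) / math.sqrt(ss_xbar / (n - 1) / n)
log_glr_from_t = (n / 2.0) * math.log(1.0 + t ** 2 / (n - 1))

print(log_glr, log_glr_from_t)
```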
Note that the generalized likelihood ratio test† doesn't in general enjoy any optimal power properties for small samples. It may be uniformly most powerful (though not for a two-tailed test—see @Michael's answer), locally most powerful (i.e. it coincides with Rao's score test—see @Michael's answer again), uniformly most powerful among unbiased tests, or at least admissible; but it may be inadmissible, or even "worse than useless". (For large samples, given some regularity conditions, Wilks' Theorem applies.)
† More often called just the "likelihood ratio test"; perhaps because in practice testing of point vs point hypotheses is rare & there's little need for the distinction.
|
40,548
|
LRT comparing a random effects model and nested logistic regression model
|
Yes, they are nested: the mixed model reduces to the simpler model if $\sigma^2_1=\sigma^2_{x_2}=0$. (This is the same as $G=0$, because the covariances must be zero if the variances are, but stating it in terms of a joint condition on $\{\sigma^2_1, \sigma^2_{x_2}\}$ is probably easier to understand.)
The likelihood ratio test in its usual form doesn't work right — it's conservative — because the derivation of the likelihood ratio test depends on a Taylor expansion of the log-likelihood around the null parameters, which doesn't work if the null parameters are on the boundary of the feasible model space (you can't expand around $\sigma^2=0$, because that implies that you're including negative variance values in your expansion). This is discussed in a variety of places (Self and Liang 1987; Stram and Lee 1994; Goldman and Whelan 2000; Pinheiro and Bates 2000). For simple models there is a known correction factor to the usual null distribution. For example if you're testing between models that differ by a single variance parameter (e.g. random intercept model vs. no-random-intercept model), the null distribution of $-2\Delta(\log L)$ is $0.5\chi^2_0 + 0.5\chi^2_1$, where $\chi^2_0$ is a point mass at zero; the bottom line here is that the nominal LRT p-value should be divided by 2. For more complicated models it's usually hard to derive, and people often calculate the p-value by parametric bootstrapping. The GLMM FAQ has a section on this ...
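The boundary effect can be seen in a toy analogue that avoids mixed-model fitting entirely (an illustration of the phenomenon, not the variance-component problem itself): testing $\theta = 0$ against $\theta > 0$ for the mean of $N(\theta, 1)$ with the constraint $\theta \geq 0$. The constrained MLE is $\max(\bar x, 0)$, so under the null $-2\Delta(\log L) = n \max(\bar x, 0)^2 \sim 0.5\chi^2_0 + 0.5\chi^2_1$, and comparing to the naive $\chi^2_1$ cutoff gives half the nominal size:

```python
import math
import random

random.seed(42)
n, reps = 50, 200_000
chisq1_95 = 3.8415     # 95th percentile of the chi-squared(1) distribution

stats = []
for _ in range(reps):
    xbar = random.gauss(0.0, 1.0 / math.sqrt(n))   # xbar under the null
    theta_hat = max(xbar, 0.0)                     # MLE under theta >= 0
    stats.append(n * theta_hat ** 2)               # -2 log LR for H0: theta = 0

p_zero = sum(s == 0.0 for s in stats) / reps               # point mass: ~0.5
p_reject_naive = sum(s > chisq1_95 for s in stats) / reps  # ~0.025, not 0.05
print(p_zero, p_reject_naive)
```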
In particular, Stram and Lee (1994) discuss the geometry of some of the more complex cases (it's been a long time since I read it ...) The particular mixture of $\chi^2$s that forms the null distribution may be analytically derivable, but in my experience people usually give up and find the null distribution by simulation. The example below is from Pinheiro and Bates (2000) p. 87 (via Google Books): they show computationally that the null distribution for a particular comparison (which would be 1|Worker vs. 1|Worker/Machine) is approximately $0.65 \chi^2_0 + 0.35 \chi^2_1$; they then more or less say that they go ahead and use the naive LRT because it's easier.
As shown in the above-linked GLMM FAQ section, you can use pbkrtest::PBmodcomp() to get a valid p-value by parametric bootstrapping ...
Stram, Daniel O, and Jae Won Lee. “Variance Components Testing in the Longitudinal Fixed Effects Model.” Biometrics 50, no. 4 (1994): 1171–77.
|
40,549
|
Standard errors in LME4 linear mixed models
|
By default in R, treatment contrasts are used for factors. This means that what you get in the output from summary(mod) are the differences from the reference level for treatment. E.g., 37.4 is the difference between treatment B and treatment A.
If you want to get the mean for treatment B, you will need to add the coefficients. For the standard errors, you also need to account for the covariance between the estimates of the fixed effects. The following code illustrates how this is done (which is essentially what the effects and emmeans packages do under the hood):
coefs <- fixef(mod)
V <- vcov(mod)
# mean and std. error for treatment B
DF <- data.frame(treatment = factor("B", levels = LETTERS[1:3]))
X <- model.matrix(~ treatment, data = DF)
c(X %*% coefs)
sqrt(diag(X %*% V %*% t(X)))
# mean and std. error for treatment C
DF <- data.frame(treatment = factor("C", levels = LETTERS[1:3]))
X <- model.matrix(~ treatment, data = DF)
c(X %*% coefs)
sqrt(diag(X %*% V %*% t(X)))
|
40,550
|
Is my proof that relative entropy is never negative correct?
|
I think you have introduced good ideas, but some care is needed to make sense of all this.
The unifying concept is of absolutely continuous measure. Given two measures $\nu$ and $\mu$ on the same measure space, $\nu$ is said to be absolutely continuous with respect to $\mu$ provided $\nu$ never assigns a nonzero value to any set of zero $\mu$ measure. The Radon-Nikodym Theorem asserts this is tantamount to the existence of a $\mu$-measurable function $f$ which converts $\mu$ into $\nu;$ that is, for all measurable sets $A,$
$$\nu(A) = \int f\,\mathrm{d}\mu.$$
In this case $f$ is the Radon-Nikodym derivative of $\nu$ with respect to $\mu,$ written
$$f = \frac{\mathrm{d}\nu}{\mathrm{d}\mu}.$$
(Think of $f$ as a "multiplicative change of measure:" by multiplying the values of $\mu$ it distorts $\mu$ into a different measure, which is precisely $\nu;$ and provided almost all values of $f$ are finite, $f$ cannot distort the measure too much and make it "singular.")
The two most prominent examples in statistics are
$\mu$ is Lebesgue measure on $\mathbb{R}^n$ and $\nu$ is the probability measure of an absolutely continuous random variable $X$ with values in $\mathbb{R}^n.$ In this case $f$ is the probability density function (pdf) of $X.$
$\mu$ is the counting measure on $\mathbb{R}^n$ and $\nu$ is the probability measure of a discrete variable $X$ with values in $\mathbb{R}^n.$ In this case $f$ is the probability mass function (pmf) of $X.$
Measure is the unifying concept and the Radon-Nikodym derivative simultaneously handles ratios of pmfs and ratios of pdfs.
The setting of the question concerns two random variables $X$ and $Y$ absolutely continuous with respect to some measure $\mu,$ with Radon-Nikodym derivatives $f$ and $g$ respectively. Suppose, further, that $Y$ is absolutely continuous with respect to $X,$ the probability measure of $Y$ is $\lambda,$ and the probability measure of $X$ is $\nu.$ It follows easily (from the definitions) that the function $h = g/f$ is the Radon-Nikodym derivative of $\lambda$ with respect to $\nu$ and it is almost everywhere defined with respect to the measure $\mu.$
In any event, because $\log$ is a concave extended-real function on the non-negative reals (taking the value $-\infty$ at $0$), its value at any weighted average of a set of points is never less than the weighted average of its values at those points (Jensen's Inequality). The broadest concept of "weighted average" is the integral against a measure like $\nu;$ thus, for any $\nu$-measurable function $h:\mathbb{R}\to [0,\infty),$
$$\log \int h\, \mathrm{d}\nu \ge \int \log(h)\,\mathrm{d}\nu.$$
(When both sides are exponentiated this is also known as the (weighted) Arithmetic Mean - Geometric Mean Inequality.)
Plugging in $h = g/f$ and $f = \mathrm{d}\nu/\mathrm{d}\mu$ and remembering all probability measures integrate to unity (as part of their definition) gives
$$\eqalign{
0 &= \log(1) = \log \int \mathrm{d}\lambda &&\color{Gray}{\lambda\text{ is a probability measure}}\\
&= \log \int g\,\mathrm{d}\mu &&\color{Gray}{g = \frac{\mathrm{d}\lambda}{\mathrm{d}\mu}}\\
&= \log \int \frac{g}{f}\,f\,\mathrm{d}\mu &&\color{Gray}{gf/f=g}\\
&= \log \int h \,\mathrm{d}\nu &&\color{Gray}{h = g/f\text{ and }f = \frac{\mathrm{d}\nu}{\mathrm{d}\mu}} \\
&\ge \int \log(h)\,\mathrm{d}\nu &&\color{Gray}{\text{Jensen}} \\
&= \int \log\left(\frac{g}{f}\right)\,f\,\mathrm{d}\mu &&\color{Gray}{h=g/f\text{ and } f = \frac{\mathrm{d}\nu}{\mathrm{d}\mu}}.
}$$
Negating this inequality produces the desired result, QED.
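(A quick finite check of the chain of (in)equalities, with $\mu$ taken to be counting measure on six points and two made-up pmfs playing the roles of $f$ and $g$:)

```python
import math
import random

random.seed(3)

def rand_pmf(n):
    """A random probability mass function on n points."""
    w = [random.random() + 1e-9 for _ in range(n)]
    total = sum(w)
    return [wi / total for wi in w]

f = rand_pmf(6)   # plays the role of dnu/dmu  (pmf of X)
g = rand_pmf(6)   # plays the role of dlambda/dmu  (pmf of Y)

# Right-hand side of Jensen's inequality: the nu-average of log(g/f) ...
rhs = sum(fi * math.log(gi / fi) for fi, gi in zip(f, g))
# ... bounded above by log of the nu-average of g/f, i.e. log(1) = 0.
lhs = math.log(sum(fi * (gi / fi) for fi, gi in zip(f, g)))

kl = -rhs         # the relative entropy D(f || g), hence nonnegative
print(round(lhs, 12), kl)
```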
|
Is my proof that relative entropy is never negative correct?
|
I think you have introduced good ideas, but some care is needed to make sense of all this.
The unifying concept is of absolutely continuous measure. Given two measures $\nu$ and $\mu$ on the same mea
|
Is my proof that relative entropy is never negative correct?
I think you have introduced good ideas, but some care is needed to make sense of all this.
The unifying concept is that of an absolutely continuous measure. Given two measures $\nu$ and $\mu$ on the same measure space, $\nu$ is said to be absolutely continuous with respect to $\mu$ provided $\nu$ never assigns a nonzero value to any set of zero $\mu$ measure. The Radon-Nikodym Theorem asserts this is tantamount to the existence of a $\mu$-measurable function $f$ which converts $\mu$ into $\nu;$ that is, for all measurable sets $A,$
$$\nu(A) = \int f\,\mathrm{d}\mu.$$
In this case $f$ is the Radon-Nikodym derivative of $\nu$ with respect to $\mu,$ written
$$f = \frac{\mathrm{d}\nu}{\mathrm{d}\mu}.$$
(Think of $f$ as a "multiplicative change of measure:" by multiplying the values of $\mu$ it distorts $\mu$ into a different measure, which is precisely $\nu;$ and provided almost all values of $f$ are finite, $f$ cannot distort the measure too much and make it "singular.")
The two most prominent examples in statistics are
$\mu$ is Lebesgue measure on $\mathbb{R}^n$ and $\nu$ is the probability measure of an absolutely continuous random variable $X$ with values in $\mathbb{R}^n.$ In this case $f$ is the probability density function (pdf) of $X.$
$\mu$ is the counting measure on $\mathbb{R}^n$ and $\nu$ is the probability measure of a discrete variable $X$ with values in $\mathbb{R}^n.$ In this case $f$ is the probability mass function (pmf) of $X.$
Measure is the unifying concept and the Radon-Nikodym derivative simultaneously handles ratios of pmfs and ratios of pdfs.
The setting of the question concerns two random variables $X$ and $Y$ absolutely continuous with respect to some measure $\mu,$ with Radon-Nikodym derivatives $f$ and $g$ respectively. Suppose, further, that $Y$ is absolutely continuous with respect to $X,$ the probability measure of $Y$ is $\lambda,$ and the probability measure of $X$ is $\nu.$ It follows easily (from the definitions) that the function $h = g/f$ is the Radon-Nikodym derivative of $\lambda$ with respect to $\nu$ and it is almost everywhere defined with respect to the measure $\mu.$
In any event, because $\log$ is a concave extended-real function on the non-negative reals (taking the value $-\infty$ at $0$), its value at any weighted average of a set of points is never less than the weighted average of its values at those points (Jensen's Inequality). The broadest concept of "weighted average" is the integral against a measure like $\nu;$ thus, for any $\nu$-measurable function $h:\mathbb{R}\to [0,\infty),$
$$\log \int h\, \mathrm{d}\nu \ge \int \log(h)\,\mathrm{d}\nu.$$
(When both sides are exponentiated this is also known as the (weighted) Arithmetic Mean - Geometric Mean Inequality.)
Plugging in $h = g/f$ and $f = \mathrm{d}\nu/\mathrm{d}\mu$ and remembering all probability measures integrate to unity (as part of their definition) gives
$$\eqalign{
0 &= \log(1) = \log \int \mathrm{d}\lambda &&\color{Gray}{\lambda\text{ is a probability measure}}\\
&= \log \int g\,\mathrm{d}\mu &&\color{Gray}{g = \frac{\mathrm{d}\lambda}{\mathrm{d}\mu}}\\
&= \log \int \frac{g}{f}\,f\,\mathrm{d}\mu &&\color{Gray}{gf/f=g}\\
&= \log \int h \,\mathrm{d}\nu &&\color{Gray}{h = g/f\text{ and }f = \frac{\mathrm{d}\nu}{\mathrm{d}\mu}} \\
&\ge \int \log(h)\,\mathrm{d}\nu &&\color{Gray}{\text{Jensen}} \\
&= \int \log\left(\frac{g}{f}\right)\,f\,\mathrm{d}\mu &&\color{Gray}{h=g/f\text{ and } f = \frac{\mathrm{d}\nu}{\mathrm{d}\mu}}.
}$$
Negating this inequality produces the desired result, QED.
|
Is my proof that relative entropy is never negative correct?
I think you have introduced good ideas, but some care is needed to make sense of all this.
The unifying concept is of absolutely continuous measure. Given two measures $\nu$ and $\mu$ on the same mea
|
40,551
|
Smoothing splines with a boundary constraint
|
As pointed out in the comments, the pc argument of the s() function included in the mgcv package does not allow for multiple constraint points. This is unfortunate, but I think it should not be too complicated to achieve the objective outside the realm of that specific package.
Intro
I think we can obtain the result we wish using two strategies:
approximate the conditions (we will not get exactly $y = 0$ at $x = [0, 10]$)
exact constraints
The first strategy has the advantage of allowing easy inference and can also be easily translated into a Bayesian setting if one wishes (it might also be possible within mgcv, but I am not a super expert on the package). However, I will not go into much detail; I will point to some references instead.
I will discuss both solutions using P-splines smoothing as introduced by Eilers and Marx, 1996 (option bs = ps in the s() function). P-splines combine B-spline bases and finite difference penalties (you can read more about this here and here... please take a look at the extrapolation properties of P-splines because they are relevant in your case).
In what follows I will indicate with $B$ a matrix of B-spline bases, with $P$ a finite difference penalty matrix and with $\lambda$ the smoothing parameter (I will keep it fixed for convenience in the codes).
Strategy 1 - extra penalty
This 'trick' consists in adding an extra penalty term to the penalized problem. The penalized least squares problem becomes then
$$
\min_{c} S_{p} = \|y - B c\|^{2} + \lambda c^{\top} P c + \kappa (\Gamma c - v(x_{0}))^{\top} (\Gamma c - v(x_{0}))
$$
where $\Gamma$ is the matrix of B-spline functions evaluated at the boundary points, $\kappa$ is a large constant (say $10^8$) and $v(x_{0})$ are the boundary ordinates (in your case a vector of zeros of dimension 2).
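A minimal sketch of this strategy (Python/NumPy, with a deliberately crude polynomial basis standing in for the B-spline basis $B$; the full R implementation follows below) shows that the $\kappa$ term pins the fit near, but not exactly at, the boundary values:

```python
import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(0.1, 0.9, 200)                 # interior of the domain [0, 1]
y = np.sin((10 * x) ** 2 / 10) + 0.2 * rng.standard_normal(x.size)

B = np.vander(x, 4, increasing=True)           # crude basis in place of B-splines
Gamma = np.vander(np.array([0.0, 1.0]), 4, increasing=True)
v0 = np.zeros(2)                               # boundary values: y = 0 at x = 0 and 1
D = np.diff(np.eye(4), n=2, axis=0)            # second-order differences
lam, kappa = 1e-3, 1e8

# minimise ||y - Bc||^2 + lam * c'D'Dc + kappa * ||Gamma c - v0||^2
A = B.T @ B + lam * D.T @ D + kappa * Gamma.T @ Gamma
c = np.linalg.solve(A, B.T @ y + kappa * Gamma.T @ v0)
resid = Gamma @ c - v0                         # small, but not exactly zero
print(np.abs(resid).max())
```

The boundary residual shrinks as $\kappa$ grows, but never vanishes exactly; that is the "soft" nature of the approximation.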
Strategy 2 - Lagrange multipliers
The previous strategy gives only a sort of 'soft' approximation. We can obtain an exact match using Lagrange multipliers (a reference in this context is here). In this case the penalized least squares problem is slightly different:
$$
\min_{c} S_{l} = \|y - B c\|^{2} + \lambda c^{\top} P c + \gamma^{\top} (\Gamma c - v(x_{0}))
$$
where $\gamma$ is a vector of Lagrange multipliers to be estimated.
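Stationarity of $S_l$ leads to a saddle-point (KKT) linear system. A sketch in Python/NumPy, again with a crude polynomial basis standing in for $B$, shows the constraint now holds exactly up to floating point:

```python
import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(0.1, 0.9, 200)
y = np.sin((10 * x) ** 2 / 10) + 0.2 * rng.standard_normal(x.size)

B = np.vander(x, 4, increasing=True)           # crude basis in place of B-splines
Gamma = np.vander(np.array([0.0, 1.0]), 4, increasing=True)
v0 = np.zeros(2)                               # boundary values: y = 0 at x = 0 and 1
D = np.diff(np.eye(4), n=2, axis=0)
lam = 1e-3

# saddle-point system (constant factors absorbed into gamma):
# [[B'B + lam*D'D, Gamma'], [Gamma, 0]] [c; gamma] = [B'y; v0]
K = np.block([[B.T @ B + lam * D.T @ D, Gamma.T],
              [Gamma, np.zeros((2, 2))]])
sol = np.linalg.solve(K, np.concatenate([B.T @ y, v0]))
c, mult = sol[:4], sol[4:]
print(np.abs(Gamma @ c - v0).max())            # ~ machine precision
```

This is the same bordered system the R code below builds with `cbind(LS, RS)`.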
A small R code
I will now use both strategies to smooth your data. I hope the code is clear enough (anyway I left some comments to guide you). The code supposes that you have a function to compute the B-splines $B$ (see for example Eilers and Marx, 2010).
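The code below assumes a `bbase` function for the equidistant B-spline basis. For completeness, here is a sketch of an equivalent constructor in Python, a port of the truncated-power construction published by Eilers and Marx (the R original is analogous):

```python
import math
import numpy as np

def bbase(x, xl, xr, nseg, deg=3):
    """Equidistant B-spline basis built from scaled differences of
    truncated polynomials (port of Eilers & Marx's `bbase`)."""
    dx = (xr - xl) / nseg
    knots = xl - deg * dx + dx * np.arange(nseg + 2 * deg + 1)
    # truncated power functions (x - t)^deg * (x > t)
    P = np.where(x[:, None] > knots, x[:, None] - knots, 0.0) ** deg
    D = np.diff(np.eye(len(knots)), n=deg + 1, axis=0)
    return (-1) ** (deg + 1) * P @ D.T / (math.factorial(deg) * dx ** deg)

B = bbase(np.linspace(1, 9, 200), 0, 10, nseg=50)
print(B.shape)   # (200, nseg + deg)
# inside the domain the basis functions form a partition of unity
```

With `nseg = 50` and cubic splines this yields the same `nb = 53` basis columns as the R code.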
rm(list =ls()); graphics.off()
# Simulate some data
set.seed(2020)
xmin = 1
xmax = 9
m = 200
x = seq(xmin, xmax, length = m)
ys = sin((x^2)/10)
y = ys+rnorm(m) * 0.2
# Boundary conditions
bx = c(0, 10)
by = c(0, 0)
# Compute bases for function, first and second derivative
bdeg = 3
nseg = 50
B0 = bbase(x, bx[1], bx[2], nseg, bdeg)
nb = ncol(B0)
Gi = bbase(bx, bx[1], bx[2], nseg, bdeg)
# Set up penalty system and extra penalty
D = diff(diag(nb), diff = 2)
P = t(D) %*% D
Bb = t(B0) %*% B0
Ci = t(Gi) %*% Gi
lam = 1e1
kap = 1e8
# Solve system strategy 1
cof_p = solve(Bb + lam * P + kap * Ci) %*% (t(B0) %*% y + kap * t(Gi) %*% by)
# Solve system strategy 2
LS = rbind((Bb + lam * P), Gi)
RS = rbind(t(Gi), matrix(0, nrow(Gi), nrow(Gi)))
cof_l = solve(cbind(LS, RS)) %*% c(t(B0) %*% y, by)
# Plot results
plot(x, y, xlim = bx, pch = 16)
lines(x, ys, col = 8, lwd = 2)
points(bx, by, pch = 15)
# Strategy 1
lines(x, B0 %*% cof_p, lwd = 2, col = 2)
points(bx[1], (Gi %*% cof_p)[1], col = 2, pch = 16)
points(bx[2], (Gi %*% cof_p)[2], col = 2, pch = 16)
# Strategy 2
lines(x, B0 %*% cof_l[1:nb], lwd = 2, col = 3, lty = 2)
points(bx[1], (Gi %*% cof_l[1:nb])[1], col = 3, pch = 16, cex = 0.75)
points(bx[2], (Gi %*% cof_l[1:nb])[2], col = 3, pch = 16, cex = 0.75)
legend('bottomleft', c('data', 'signal', 'strategy1', 'strategy2'), col = c(1, 8, 2, 3), pch = 16)
The final results should look like this:
I hope this helps somehow.
|
Smoothing splines with a boundary constraint
|
As pointed in the comments, the pc argument of the s() function included in the mgcv package does not allow for multiple constraint points. This is unfortunate but I think should not be too complicate
|
Smoothing splines with a boundary constraint
As pointed out in the comments, the pc argument of the s() function included in the mgcv package does not allow for multiple constraint points. This is unfortunate, but I think it should not be too complicated to achieve the objective outside the realm of that specific package.
Intro
I think we can obtain the result we wish using two strategies:
approximate the conditions (we will not get exactly $y = 0$ at $x = [0, 10]$)
exact constraints
The first strategy has the advantage of allowing easy inference and can also be easily translated into a Bayesian setting if one wishes (it might also be possible within mgcv, but I am not a super expert on the package). However, I will not go into much detail; I will point to some references instead.
I will discuss both solutions using P-splines smoothing as introduced by Eilers and Marx, 1996 (option bs = ps in the s() function). P-splines combine B-spline bases and finite difference penalties (you can read more about this here and here... please take a look at the extrapolation properties of P-splines because they are relevant in your case).
In what follows I will indicate with $B$ a matrix of B-spline bases, with $P$ a finite difference penalty matrix and with $\lambda$ the smoothing parameter (I will keep it fixed for convenience in the codes).
Strategy 1 - extra penalty
This 'trick' consists in adding an extra penalty term to the penalized problem. The penalized least squares problem becomes then
$$
\min_{c} S_{p} = \|y - B c\|^{2} + \lambda c^{\top} P c + \kappa (\Gamma c - v(x_{0}))^{\top} (\Gamma c - v(x_{0}))
$$
where $\Gamma$ is the matrix of B-spline functions evaluated at the boundary points, $\kappa$ is a large constant (say $10^8$) and $v(x_{0})$ are the boundary ordinates (in your case a vector of zeros of dimension 2).
Strategy 2 - Lagrange multipliers
The previous strategy gives only a sort of 'soft' approximation. We can obtain an exact match using Lagrange multipliers (a reference in this context is here). In this case the penalized least squares problem is slightly different:
$$
\min_{c} S_{l} = \|y - B c\|^{2} + \lambda c^{\top} P c + \gamma^{\top} (\Gamma c - v(x_{0}))
$$
where $\gamma$ is a vector of Lagrange multipliers to be estimated.
A small R code
I will now use both strategies to smooth your data. I hope the code is clear enough (anyway I left some comments to guide you). The code supposes that you have a function to compute the B-splines $B$ (see for example Eilers and Marx, 2010).
rm(list =ls()); graphics.off()
# Simulate some data
set.seed(2020)
xmin = 1
xmax = 9
m = 200
x = seq(xmin, xmax, length = m)
ys = sin((x^2)/10)
y = ys+rnorm(m) * 0.2
# Boundary conditions
bx = c(0, 10)
by = c(0, 0)
# Compute bases for function, first and second derivative
bdeg = 3
nseg = 50
B0 = bbase(x, bx[1], bx[2], nseg, bdeg)
nb = ncol(B0)
Gi = bbase(bx, bx[1], bx[2], nseg, bdeg)
# Set up penalty system and extra penalty
D = diff(diag(nb), diff = 2)
P = t(D) %*% D
Bb = t(B0) %*% B0
Ci = t(Gi) %*% Gi
lam = 1e1
kap = 1e8
# Solve system strategy 1
cof_p = solve(Bb + lam * P + kap * Ci) %*% (t(B0) %*% y + kap * t(Gi) %*% by)
# Solve system strategy 2
LS = rbind((Bb + lam * P), Gi)
RS = rbind(t(Gi), matrix(0, nrow(Gi), nrow(Gi)))
cof_l = solve(cbind(LS, RS)) %*% c(t(B0) %*% y, by)
# Plot results
plot(x, y, xlim = bx, pch = 16)
lines(x, ys, col = 8, lwd = 2)
points(bx, by, pch = 15)
# Strategy 1
lines(x, B0 %*% cof_p, lwd = 2, col = 2)
points(bx[1], (Gi %*% cof_p)[1], col = 2, pch = 16)
points(bx[2], (Gi %*% cof_p)[2], col = 2, pch = 16)
# Strategy 2
lines(x, B0 %*% cof_l[1:nb], lwd = 2, col = 3, lty = 2)
points(bx[1], (Gi %*% cof_l[1:nb])[1], col = 3, pch = 16, cex = 0.75)
points(bx[2], (Gi %*% cof_l[1:nb])[2], col = 3, pch = 16, cex = 0.75)
legend('bottomleft', c('data', 'signal', 'strategy1', 'strategy2'), col = c(1, 8, 2, 3), pch = 16)
The final results should look like this:
I hope this helps somehow.
|
Smoothing splines with a boundary constraint
As pointed in the comments, the pc argument of the s() function included in the mgcv package does not allow for multiple constraint points. This is unfortunate but I think should not be too complicate
|
40,552
|
Smoothing splines with a boundary constraint
|
This might be a bit long so I will reply in another answer.
Following the comment to my previous answer I will attempt a solution to the following problem: fit an additive model with a shared smooth trend effect subject to boundary constraint and a random Id intercept.
Extra penalty in matrix augmentation form
In the comments above I mentioned that strategy 1 of my previous answer can be used to achieve the constrained fit in a GAMM setting. This becomes clear if we write the extra penalty solution in augmented matrix form (in what follows I will use the same notation as in my previous answer). We can say that:
$$
\min_{c} S_{p} = \|W^{1/2} (y_{p} - B_{p}c)\|^{2} + \lambda \|D_{d} c\|^{2}
$$
where $c$ is a $(m \times 1)$ vector of unknown spline coefficients, $y_{p}$ is a $((n+2) \times 1)$ vector obtained by stacking the observed $y$ and the boundary values $v(x_{0})$, $B_{p}$ is a $((n+2) \times m)$ B-spline basis matrix obtained by placing the matrices $B$ and $\Gamma$ on top of each other, $W^{1/2}$ is a $((n+2) \times (n+2))$ diagonal matrix whose first $n$ diagonal elements equal 1 and whose last two equal $\sqrt{\kappa}$, and $D_{d}$ is a $d$-th order finite difference matrix operator (the penalty matrix is equal to $P = D_{d}^{\top} D_{d}$).
P-splines as mixed models
P-splines (and all the penalized smoothing techniques included in the s() function of the mgcv package) can be written in a linear 'mixed model form'. For P-splines, different re-parameterizations are possible (see e.g. par. 10 of Eilers et al. (2015)). We can for example define
$$
\begin{array}{ll}
X = [1, x_{p}^{1}, ..., x_{p}^{d-1}] \\
Z = B_{p}D_{d}^{\top} (D_{d}D_{d}^{\top})^{-1}
\end{array}
$$
where $x_{p}$ is the $((n+2) \times 1)$ vector of time points including the two boundary abscissae. With this in mind we can then rewrite the normal equations for the minimization problem above as follows
(see also this):
$$
\left[
\begin{array}{lll}
X^{\top} W X & X^{\top} W Z \\
Z^{\top} W X & Z^{\top} W Z + \lambda I
\end{array}
\right]
\left[
\begin{array}{ll}
\beta\\ b
\end{array}
\right]
=
\left[
\begin{array}{ll}
X^{\top} W y_{p} \\ Z^{\top} W y_{p}
\end{array}
\right]
$$
where $\lambda$ is still the smoothing parameter and it is equal to the ratio of variances $\sigma^{2}/\tau^{2}$ with $b \sim N(0, \tau^{2} I)$ and $\epsilon \sim N(0, \sigma^{2} I)$.
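These blocked normal equations can be sketched directly in Python/NumPy. The $X$, $Z$, $W$ below are generic randomly generated stand-ins (the point is only the block structure and the $\lambda$ ridge on the random part); the solution is checked against the stationarity conditions of the equivalent penalized criterion $\|W^{1/2}(y_p - X\beta - Zb)\|^2 + \lambda\|b\|^2$:

```python
import numpy as np

rng = np.random.default_rng(2)
n, p, q = 60, 2, 8
X = np.column_stack([np.ones(n), np.linspace(0.0, 1.0, n)])  # fixed-effects design
Z = rng.standard_normal((n, q))                              # random-effects design
W = np.diag(rng.uniform(0.5, 2.0, n))                        # weights
y = rng.standard_normal(n)
lam = 1.5                                                    # sigma^2 / tau^2

# blocked normal equations
A = np.block([[X.T @ W @ X, X.T @ W @ Z],
              [Z.T @ W @ X, Z.T @ W @ Z + lam * np.eye(q)]])
rhs = np.concatenate([X.T @ W @ y, Z.T @ W @ y])
beta, b = np.split(np.linalg.solve(A, rhs), [p])

# stationarity check of the penalized least squares problem
r = y - X @ beta - Z @ b
print(np.abs(X.T @ W @ r).max(), np.abs(Z.T @ W @ r - lam * b).max())
```

Both gradients vanish to machine precision, confirming the two formulations agree.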
Include random intercept
To solve the original problem, we would also like to include a random intercept. This can be achieved by modifying the form of the Z matrix as follows (see also this link):
$$
Z = \left(
\begin{array}{lll}
Z_{1} ,& \texttt{1}_{1},& 0,& \dots ,& 0 \\
Z_{2} ,& 0,& \texttt{1}_{2},& \dots ,& 0 \\
\vdots ,& \vdots,& \vdots,& \vdots,& \vdots \\
Z_{J} ,& \dots,& \dots, & \dots,& \texttt{1}_{J}
\end{array}
\right)
$$
where
$\texttt{1}_{j}$ is a $((n_{j} + 2) \times 1)$ vector of ones used to model the $j-$th subject-specific intercept. Of course this also 'adds' $J$ elements to the vector of random effects $b$ with $\text{Cov}(b) = \begin{pmatrix}
\tau^2 \boldsymbol{I} & 0 \\
0 & \sigma_{\texttt{1}}^2 \boldsymbol{I}
\end{pmatrix}$
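Constructing this stacked $Z$ is mechanical with Kronecker products; a sketch in Python/NumPy with small illustrative dimensions (the shared smooth block $Z_j = Z_0$ is just random filler here):

```python
import numpy as np

J, nj, q = 3, 5, 4                 # subjects, points per subject, smooth columns
rng = np.random.default_rng(3)
Z0 = rng.standard_normal((nj, q))  # shared smooth block, identical for each subject

Z = np.hstack([
    np.tile(Z0, (J, 1)),                      # smooth part repeated down the rows
    np.kron(np.eye(J), np.ones((nj, 1))),     # subject-specific intercept columns
])
print(Z.shape)                                # (J * nj, q + J)
```

Each row has exactly one nonzero intercept column, the one for its subject.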
Small R-code
I will suppose here that you have a function for the definition of the B-spline matrices and their mixed model representation. I left comments and references in the code. In principle, I think this can be achieved within the mgcv package but unfortunately I do not know the package well enough. Instead, I will use the nlme package (on which I think mgcv is at least partially built).
#####################
# Utility functions #
#####################
Conf_Bands = function(X, Z, f_hat, s2, s2.alpha, alpha = 0.975)
{
# cit: #http://halweb.uc3m.es/esp/Personal/personas/durban/esp/web/cursos/Maringa/gam-markdown/Gams.html#26_penalized_splines_as_mixed_models
C = cbind(X, Z)
lambda = s2/s2.alpha
D = diag(c(rep(0, ncol(X)), rep(lambda, ncol(Z))))
S = s2 * rowSums(C %*% solve(t(C) %*% C + D) * C)
CB_lower = f_hat - qnorm(alpha) * sqrt(S)
CB_upper = f_hat + qnorm(alpha) * sqrt(S)
CB = cbind(CB_lower, CB_upper)
CB
}
basesMM = function(B, D, dd, ns, x)
{
# NB: needs to be modified if n_{j} is different for some j
Z0 = B %*% t(D) %*% solve(D %*% t(D))
X0 = outer(x, 1:(dd-1), '^')
Z = do.call('rbind', lapply(1:ns, function(i) Z0))
X = do.call('rbind', lapply(1:ns, function(i) X0))
return(list(X = X, Z = Z))
}
#########################
# Utility functions end #
#########################
# Simulate some data
set.seed(2020)
xmin = 1
xmax = 9
m = 100
x = seq(xmin, xmax, length = m)
ys = sin((x^2)/10)
ns = 3
y = ys + rnorm(m) * 0.2
yl = c(-2 + y, -0 + y, 2 + y)
sb = factor(rep(1:3, each = m))
dat = data.frame(y = yl, x = rep(x, ns), sub = sb)
# Boundary conditions
bx = c(0, 10)
by = c(-0, -0)
xfine = seq(bx[1], bx[2], len = m * 2)
# Create bases
# see https://onlinelibrary.wiley.com/doi/10.1002/wics.125
bdeg = 3
nseg = 25
dx = (bx[2] - bx[1]) /nseg
knots = seq(bx[1] - bdeg * dx, bx[2] + bdeg * dx, by = dx)
B0 = bbase(x, bx[1], bx[2], nseg, bdeg)
nb = ncol(B0)
Gi = bbase(bx, bx[1], bx[2], nseg, bdeg)
Bf = bbase(xfine, bx[1], bx[2], nseg, bdeg)
# Penalty stuffs
dd = 3
D = diff(diag(1, nb), diff = dd)
kap = 1e8
# Augmented matrix
Bp = rbind(B0, Gi)
# Mixed model representation for lme
# see https://www.researchgate.net/publication/290086196_Twenty_years_of_P-splines
yp = do.call('c', lapply(split(dat, dat$sub), function (x) c(x$y, by)))
datMM = data.frame(y = yp)
mmBases = basesMM(Bp, D, dd, ns, x = c(x, bx))
datMM$X = mmBases$X
datMM$Z = mmBases$Z
datMM$w = c(rep(1, m), 1/kap, 1/kap)
datMM$Id = factor(rep(1, ns * (m + 2)))
datMM$sb = factor(rep(1:ns, each = m + 2))
# lme fit:
# https://www.researchgate.net/publication/8159699_Simple_fitting_of_subject-specific_curves_for_longitudinal_data
# https://stat.ethz.ch/pipermail/r-help/2006-January/087023.html
# https://stats.stackexchange.com/questions/30970/understanding-the-linear-mixed-effects-model-equation-and-fitting-a-random-effec
fit = lme(y ~ X, random = list(Id = pdIdent(~ Z - 1), sb = pdIdent( ~ w - 1)), data = datMM, weights = ~w)
# Variance components
s2 = fit$sigma ^ 2
s2.alpha = s2 * exp(2 * unlist(fit$modelStruct)[1])
# Extract coefficients + get fit + value at boundaries
X0 = datMM$X[1:(m+2), ]
Z0 = datMM$Z[1:(m+2), ]
beta.hat = fit$coef$fixed
b.hat = fit$coef$random
f.hat = cbind(1, X0[1:m, ]) %*% beta.hat + Z0[1:m, ] %*% t(b.hat$Id)
f.hatfine = cbind(1,basesMM(Bf, D, dd, ns = 1, x = xfine)$X) %*% beta.hat + basesMM(Bf, D, dd, ns = 1, x = xfine)$Z %*% t(b.hat$Id)
f.cnt = cbind(1, X0[-c(1:m), ]) %*% beta.hat + Z0[-c(1:m), ] %*% t(b.hat$Id)
fit_bands = Conf_Bands(cbind(1, X0[1:m, ]) , Z0[1:m, ], f.hat, s2, s2.alpha)
# Plots fits
par(mfrow = c(2, 1), mar = rep(2, 4))
plot(rep(x,ns), yl, xlim = range(c(x, bx) + c(-0.5, 0.5)), main = 'Fitted curves', col = as.numeric(dat$sub), pch = 16)
abline(h = 0, lty = 3)
lines(x, f.hat[1:m] + fit$coefficients$random$sb[1], col = 8, lwd = 2)
lines(x, f.hat[1:m] + fit$coefficients$random$sb[2], col = 8, lwd = 2)
lines(x, f.hat[1:m] + fit$coefficients$random$sb[3], col = 8, lwd = 2)
# Plot smooths
plot(x, f.hat, type = 'l', main = 'Smooth-term', xlim = range(c(x, bx) + c(-0.5, 0.5)), ylim = range(fit_bands + c(-0.5, 0.5)))
rug(knots[knots <= bx[2] & knots >= bx[1]])
polygon(x = c(x, rev(x)), y = c(fit_bands[, 1], rev(fit_bands[, 2])), lty = 0, col = scales::alpha('black', alpha = 0.25))
abline(h = by)
points(bx, f.cnt, pch = 16)
lines(xfine, f.hatfine, col = 2, lty = 2)
legend('topleft', legend = c('Smooth', 'Extrapolation', 'Constraint'), col = c(1, 2, 1), lty = c(1, 2, 0), pch = c(-1, -1, 16))
I hope everything here is correct (if you find any mistakes, unclear points, or have suggestions, please let me know). Finally, I hope my answer will be useful.
|
Smoothing splines with a boundary constraint
|
This might be a bit long so I will reply in another answer.
Following the comment to my previous answer I will attempt a solution to the following problem: fit an additive model with a shared smooth t
|
Smoothing splines with a boundary constraint
This might be a bit long so I will reply in another answer.
Following the comment to my previous answer I will attempt a solution to the following problem: fit an additive model with a shared smooth trend effect subject to boundary constraint and a random Id intercept.
Extra penalty in matrix augmentation form
In the comments above I mentioned that strategy 1 of my previous answer can be used to achieve the constrained fit in a GAMM setting. This becomes clear if we write the extra penalty solution in augmented matrix form (in what follows I will use the same notation as in my previous answer). We can say that:
$$
\min_{c} S_{p} = \|W^{1/2} (y_{p} - B_{p}c)\|^{2} + \lambda \|D_{d} c\|^{2}
$$
where $c$ is a $(m \times 1)$ vector of unknown spline coefficients, $y_{p}$ is a $((n+2) \times 1)$ vector obtained by stacking the observed $y$ and the boundary values $v(x_{0})$, $B_{p}$ is a $((n+2) \times m)$ B-spline basis matrix obtained by placing the matrices $B$ and $\Gamma$ on top of each other, $W^{1/2}$ is a $((n+2) \times (n+2))$ diagonal matrix whose first $n$ diagonal elements equal 1 and whose last two equal $\sqrt{\kappa}$, and $D_{d}$ is a $d$-th order finite difference matrix operator (the penalty matrix is equal to $P = D_{d}^{\top} D_{d}$).
P-splines as mixed models
P-splines (and all the penalized smoothing techniques included in the s() function of the mgcv package) can be written in a linear 'mixed model form'. For P-splines, different re-parameterizations are possible (see e.g. par. 10 of Eilers et al. (2015)). We can for example define
$$
\begin{array}{ll}
X = [1, x_{p}^{1}, ..., x_{p}^{d-1}] \\
Z = B_{p}D_{d}^{\top} (D_{d}D_{d}^{\top})^{-1}
\end{array}
$$
where $x_{p}$ is the $((n+2) \times 1)$ vector of time points including the two boundary abscissae. With this in mind we can then rewrite the normal equations for the minimization problem above as follows
(see also this):
$$
\left[
\begin{array}{lll}
X^{\top} W X & X^{\top} W Z \\
Z^{\top} W X & Z^{\top} W Z + \lambda I
\end{array}
\right]
\left[
\begin{array}{ll}
\beta\\ b
\end{array}
\right]
=
\left[
\begin{array}{ll}
X^{\top} W y_{p} \\ Z^{\top} W y_{p}
\end{array}
\right]
$$
where $\lambda$ is still the smoothing parameter and it is equal to the ratio of variances $\sigma^{2}/\tau^{2}$ with $b \sim N(0, \tau^{2} I)$ and $\epsilon \sim N(0, \sigma^{2} I)$.
Include random intercept
To solve the original problem, we would also like to include a random intercept. This can be achieved by modifying the form of the Z matrix as follows (see also this link):
$$
Z = \left(
\begin{array}{lll}
Z_{1} ,& \texttt{1}_{1},& 0,& \dots ,& 0 \\
Z_{2} ,& 0,& \texttt{1}_{2},& \dots ,& 0 \\
\vdots ,& \vdots,& \vdots,& \vdots,& \vdots \\
Z_{J} ,& \dots,& \dots, & \dots,& \texttt{1}_{J}
\end{array}
\right)
$$
where
$\texttt{1}_{j}$ is a $((n_{j} + 2) \times 1)$ vector of ones used to model the $j-$th subject-specific intercept. Of course this also 'adds' $J$ elements to the vector of random effects $b$ with $\text{Cov}(b) = \begin{pmatrix}
\tau^2 \boldsymbol{I} & 0 \\
0 & \sigma_{\texttt{1}}^2 \boldsymbol{I}
\end{pmatrix}$
Small R-code
I will suppose here that you have a function for the definition of the B-spline matrices and their mixed model representation. I left comments and references in the code. In principle, I think this can be achieved within the mgcv package but unfortunately I do not know the package well enough. Instead, I will use the nlme package (on which I think mgcv is at least partially built).
#####################
# Utility functions #
#####################
Conf_Bands = function(X, Z, f_hat, s2, s2.alpha, alpha = 0.975)
{
# cit: #http://halweb.uc3m.es/esp/Personal/personas/durban/esp/web/cursos/Maringa/gam-markdown/Gams.html#26_penalized_splines_as_mixed_models
C = cbind(X, Z)
lambda = s2/s2.alpha
D = diag(c(rep(0, ncol(X)), rep(lambda, ncol(Z))))
S = s2 * rowSums(C %*% solve(t(C) %*% C + D) * C)
CB_lower = f_hat - qnorm(alpha) * sqrt(S)
CB_upper = f_hat + qnorm(alpha) * sqrt(S)
CB = cbind(CB_lower, CB_upper)
CB
}
basesMM = function(B, D, dd, ns, x)
{
# NB: needs to be modified if n_{j} is different for some j
Z0 = B %*% t(D) %*% solve(D %*% t(D))
X0 = outer(x, 1:(dd-1), '^')
Z = do.call('rbind', lapply(1:ns, function(i) Z0))
X = do.call('rbind', lapply(1:ns, function(i) X0))
return(list(X = X, Z = Z))
}
#########################
# Utility functions end #
#########################
# Simulate some data
set.seed(2020)
xmin = 1
xmax = 9
m = 100
x = seq(xmin, xmax, length = m)
ys = sin((x^2)/10)
ns = 3
y = ys + rnorm(m) * 0.2
yl = c(-2 + y, -0 + y, 2 + y)
sb = factor(rep(1:3, each = m))
dat = data.frame(y = yl, x = rep(x, ns), sub = sb)
# Boundary conditions
bx = c(0, 10)
by = c(-0, -0)
xfine = seq(bx[1], bx[2], len = m * 2)
# Create bases
# see https://onlinelibrary.wiley.com/doi/10.1002/wics.125
bdeg = 3
nseg = 25
dx = (bx[2] - bx[1]) /nseg
knots = seq(bx[1] - bdeg * dx, bx[2] + bdeg * dx, by = dx)
B0 = bbase(x, bx[1], bx[2], nseg, bdeg)
nb = ncol(B0)
Gi = bbase(bx, bx[1], bx[2], nseg, bdeg)
Bf = bbase(xfine, bx[1], bx[2], nseg, bdeg)
# Penalty stuffs
dd = 3
D = diff(diag(1, nb), diff = dd)
kap = 1e8
# Augmented matrix
Bp = rbind(B0, Gi)
# Mixed model representation for lme
# see https://www.researchgate.net/publication/290086196_Twenty_years_of_P-splines
yp = do.call('c', lapply(split(dat, dat$sub), function (x) c(x$y, by)))
datMM = data.frame(y = yp)
mmBases = basesMM(Bp, D, dd, ns, x = c(x, bx))
datMM$X = mmBases$X
datMM$Z = mmBases$Z
datMM$w = c(rep(1, m), 1/kap, 1/kap)
datMM$Id = factor(rep(1, ns * (m + 2)))
datMM$sb = factor(rep(1:ns, each = m + 2))
# lme fit:
# https://www.researchgate.net/publication/8159699_Simple_fitting_of_subject-specific_curves_for_longitudinal_data
# https://stat.ethz.ch/pipermail/r-help/2006-January/087023.html
# https://stats.stackexchange.com/questions/30970/understanding-the-linear-mixed-effects-model-equation-and-fitting-a-random-effec
fit = lme(y ~ X, random = list(Id = pdIdent(~ Z - 1), sb = pdIdent( ~ w - 1)), data = datMM, weights = ~w)
# Variance components
s2 = fit$sigma ^ 2
s2.alpha = s2 * exp(2 * unlist(fit$modelStruct)[1])
# Extract coefficients + get fit + value at boundaries
X0 = datMM$X[1:(m+2), ]
Z0 = datMM$Z[1:(m+2), ]
beta.hat = fit$coef$fixed
b.hat = fit$coef$random
f.hat = cbind(1, X0[1:m, ]) %*% beta.hat + Z0[1:m, ] %*% t(b.hat$Id)
f.hatfine = cbind(1,basesMM(Bf, D, dd, ns = 1, x = xfine)$X) %*% beta.hat + basesMM(Bf, D, dd, ns = 1, x = xfine)$Z %*% t(b.hat$Id)
f.cnt = cbind(1, X0[-c(1:m), ]) %*% beta.hat + Z0[-c(1:m), ] %*% t(b.hat$Id)
fit_bands = Conf_Bands(cbind(1, X0[1:m, ]) , Z0[1:m, ], f.hat, s2, s2.alpha)
# Plots fits
par(mfrow = c(2, 1), mar = rep(2, 4))
plot(rep(x,ns), yl, xlim = range(c(x, bx) + c(-0.5, 0.5)), main = 'Fitted curves', col = as.numeric(dat$sub), pch = 16)
abline(h = 0, lty = 3)
lines(x, f.hat[1:m] + fit$coefficients$random$sb[1], col = 8, lwd = 2)
lines(x, f.hat[1:m] + fit$coefficients$random$sb[2], col = 8, lwd = 2)
lines(x, f.hat[1:m] + fit$coefficients$random$sb[3], col = 8, lwd = 2)
# Plot smooths
plot(x, f.hat, type = 'l', main = 'Smooth-term', xlim = range(c(x, bx) + c(-0.5, 0.5)), ylim = range(fit_bands + c(-0.5, 0.5)))
rug(knots[knots <= bx[2] & knots >= bx[1]])
polygon(x = c(x, rev(x)), y = c(fit_bands[, 1], rev(fit_bands[, 2])), lty = 0, col = scales::alpha('black', alpha = 0.25))
abline(h = by)
points(bx, f.cnt, pch = 16)
lines(xfine, f.hatfine, col = 2, lty = 2)
legend('topleft', legend = c('Smooth', 'Extrapolation', 'Constraint'), col = c(1, 2, 1), lty = c(1, 2, 0), pch = c(-1, -1, 16))
I hope everything here is correct (if you find any mistakes, unclear points, or have suggestions, please let me know). Finally, I hope my answer will be useful.
|
Smoothing splines with a boundary constraint
This might be a bit long so I will reply in another answer.
Following the comment to my previous answer I will attempt a solution to the following problem: fit an additive model with a shared smooth t
|
40,553
|
What is an induced probability function?
|
Not quite. The setting is a probability space $(\Omega,\mathfrak{F},\mathbb{P})$ and a measurable function $X$ whose domain is $\Omega$ and whose codomain usually is $\mathbb{R}$ with its Borel sigma-algebra $\mathfrak{B}$ (but generally could be any measurable space).
$X$ induces a probability distribution $\mathbb{P}_X$ as the push-forward of $\mathbb{P}$ via $X$, sometimes written $X_{*}\mathbb{P},$ defined as
$$\mathbb{P}_X(E) = (X_{*})\mathbb{P}(E) = \mathbb{P}(X^{-1}(E)) = \mathbb{P}\left(\{\omega\in\Omega\mid X(\omega)\in E\}\right)$$
for any event $E\in\mathfrak{B}.$
Let's do a simple example. Let $\Omega$ be the set of the three possible ways a flipped coin may land: heads, tails, or on its edge. Let its sigma-algebra $\mathfrak{F}$ consist of all subsets of $\Omega.$ Let the probability distribution $\mathbb{P}$ assign the value $p$ to $\{\text{Heads}\},$ $1-p$ to $\{\text{Tails}\},$ and $0$ to $\{\text{Side}\}.$ This determines $\mathbb P$ on every subset of $\Omega$ according to the laws of probability.
The function $X:\Omega\to\mathbb{R}$ that equals $1$ for $\omega=\text{Heads}$ and otherwise equals $0$ is the indicator of $\text{Heads}.$ $X$ is obviously measurable (because every subset of $\Omega$ is measurable). To figure out what $\mathbb{P}_X$ is, let $E\in\mathfrak{B}$ be a Borel-measurable set. $X_{*}\mathbb{P}(E)$ is the sum of up to three values: $p$ if $X(\text{Heads})\in E,$ plus $1-p$ if $X(\text{Tails})\in E,$ plus $0$ if $X(\text{Side})\in E$.
One convenient way to express $\mathbb{P}_X$ uses the "one-point" measures $\delta_a$ defined on the Borel sets of $\mathbb{R}.$ These assign the value $1$ to an event $E$ when $a\in E$ and otherwise assign the value $0.$ It's easy to check that they are indeed measures.
The random variable $X$ thereby pushes $\mathbb P$ into the induced measure (or "induced probability function") $$\mathbb{P}_X = (1-p)\delta_0 + p\delta_1.$$
Another description of the induced measure considers only events of the form $E(x)=(-\infty, x]$ for $x\in \mathbb{R},$ because these determine the entire Borel sigma algebra of $\mathbb R.$ The formula
$$F_X: x\to \mathbb{P}_X(E(x)) = \mathbb{P}(X\le x) = \mathbb{P}\left(\{\omega\in\Omega\mid X(\omega)\le x\}\right)$$
defines a function on $\mathbb R,$ the cumulative distribution function of $X.$ It equals $0$ for $x\lt 0,$ jumps up to a constant value of $1-p$ for $0\le x \lt 1,$ and then jumps (by an amount $p$) up to $1$ for $x\ge 1.$
In this example of a Bernoulli$(p)$ random variable, please notice that
$X$ is neither an injection nor a surjection from $\Omega$ to $\mathbb R.$ Its image is merely the set $\{0,1\}.$
$F_X$ is neither an injection nor a surjection from $\mathbb R$ to the set of possible probabilities $[0,1].$ Its image is the set $\{0,1-p,1\}.$
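To make the example fully concrete, a short computational sketch (Python; this is exactly the coin example above, with the arbitrary choice $p = 0.25$ so all sums are exact):

```python
p = 0.25
P = {"Heads": p, "Tails": 1 - p, "Side": 0.0}   # probability measure on Omega
X = {"Heads": 1.0, "Tails": 0.0, "Side": 0.0}   # indicator of Heads

def pushforward(E):
    """P_X(E) = P({omega in Omega : X(omega) in E}) for a set E of reals."""
    return sum(P[w] for w in P if X[w] in E)

def F(x):
    """Cumulative distribution function F_X(x) = P_X((-inf, x])."""
    return sum(P[w] for w in P if X[w] <= x)

print(pushforward({1.0}))          # the weight p of delta_1
print(pushforward({0.0}))          # the weight 1 - p of delta_0 (Tails plus Side)
print(F(-0.5), F(0.5), F(1.5))     # the three levels 0, 1 - p, 1 of the CDF
```

The two printed weights are precisely the coefficients in $\mathbb{P}_X = (1-p)\delta_0 + p\delta_1$.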
|
What is an induced probability function?
|
Not quite. The setting is a probability space $(\Omega,\mathfrak{F},\mathbb{P})$ and a measurable function $X$ whose domain is $\Omega$ and whose codomain usually is $\mathbb{R}$ with its Borel sigma
|
What is an induced probability function?
Not quite. The setting is a probability space $(\Omega,\mathfrak{F},\mathbb{P})$ and a measurable function $X$ whose domain is $\Omega$ and whose codomain usually is $\mathbb{R}$ with its Borel sigma-algebra $\mathfrak{B}$ (but generally could be any measurable space).
$X$ induces a probability distribution $\mathbb{P}_X$ as the push-forward of $\mathbb{P}$ via $X$, sometimes written $X_{*}\mathbb{P},$ defined as
$$\mathbb{P}_X(E) = (X_{*})\mathbb{P}(E) = \mathbb{P}(X^{-1}(E)) = \mathbb{P}\left(\{\omega\in\Omega\mid X(\omega)\in E\}\right)$$
for any event $E\in\mathfrak{B}.$
Let's do a simple example. Let $\Omega$ be the set of the three possible ways a flipped coin may land: heads, tails, or on its edge. Let its sigma-algebra $\mathfrak{F}$ consist of all subsets of $\Omega.$ Let the probability distribution $\mathbb{P}$ assign the value $p$ to $\{\text{Heads}\},$ $1-p$ to $\{\text{Tails}\},$ and $0$ to $\{\text{Side}\}.$ This determines $\mathbb P$ on every subset of $\Omega$ according to the laws of probability.
The function $X:\Omega\to\mathbb{R}$ that equals $1$ for $\omega=\text{Heads}$ and otherwise equals $0$ is the indicator of $\text{Heads}.$ $X$ is obviously measurable (because every subset of $\Omega$ is measurable). To figure out what $\mathbb{P}_X$ is, let $E\in\mathfrak{B}$ be a Borel-measurable set. $X_{*}\mathbb{P}(E)$ is the sum of up to three values: $p$ if $X(\text{Heads})\in E,$ plus $1-p$ if $X(\text{Tails})\in E,$ plus $0$ if $X(\text{Side})\in E$.
One convenient way to express $\mathbb{P}_X$ uses the "one-point" measures $\delta_a$ defined on the Borel sets of $\mathbb{R}.$ These assign the value $1$ to an event $E$ when $a\in E$ and otherwise assign the value $0.$ It's easy to check that they are indeed measures.
The random variable $X$ thereby pushes $\mathbb P$ into the induced measure (or "induced probability function") $$\mathbb{P}_X = (1-p)\delta_0 + p\delta_1.$$
Another description of the induced measure considers only events of the form $E(x)=(-\infty, x]$ for $x\in \mathbb{R},$ because these determine the entire Borel sigma algebra of $\mathbb R.$ The formula
$$F_X: x\to \mathbb{P}_X(E(x)) = \mathbb{P}(X\le x) = \mathbb{P}\left(\{\omega\in\Omega\mid X(\omega)\le x\}\right)$$
defines a function on $\mathbb R,$ the cumulative distribution function of $X.$ It equals $0$ for $x\lt 0,$ jumps up to a constant value of $1-p$ for $0\le x \lt 1,$ and then jumps (by an amount $p$) up to $1$ for $x\ge 1.$
In this example of a Bernoulli$(p)$ random variable, please notice that
$X$ is neither an injection nor a surjection from $\Omega$ to $\mathbb R.$ Its image is merely the set $\{0,1\}.$
$F_X$ is neither an injection nor a surjection from $\mathbb R$ to the set of possible probabilities $[0,1].$ Its image is the set $\{0,1-p,1\}.$
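The push-forward measure and CDF of this Bernoulli example can be sketched in a few lines of Python. This is my own illustrative sketch (not from the answer); representing Borel events as predicates only works because $X$ has a finite image:

```python
# Sketch of the induced measure P_X = (1-p)*delta_0 + p*delta_1 for the
# Bernoulli example above; a Borel event E is represented as a predicate.

p = 0.25  # P(Heads)

def delta(a):
    """One-point measure at a: assigns 1 to an event containing a, else 0."""
    return lambda event: 1.0 if event(a) else 0.0

delta0, delta1 = delta(0), delta(1)

def P_X(event):
    """Induced probability of an event under X."""
    return (1 - p) * delta0(event) + p * delta1(event)

def F_X(x):
    """Cumulative distribution function: probability of (-inf, x]."""
    return P_X(lambda t: t <= x)

print(F_X(-0.5), F_X(0.5), F_X(2.0))  # 0.0 0.75 1.0
```

The three printed values trace exactly the step function described above: $0$ before the first jump, $1-p$ on $[0,1)$, and $1$ from $x=1$ onward.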
|
What is an induced probability function?
Not quite. The setting is a probability space $(\Omega,\mathfrak{F},\mathbb{P})$ and a measurable function $X$ whose domain is $\Omega$ and whose codomain usually is $\mathbb{R}$ with its Borel sigma
|
40,554
|
AIC can recommend an overfitting model?
|
AIC can most definitely select an overfit model, because you e.g.
only assess overfit models (so one of those gets selected), or
you offer up an overfit model vs. an inappropriate model (seems to be your example)/a very overfit model/an underfit model, or
you compare more than one model via AIC (the more models the worse this gets) and by testing several you end up overfitting via the model selection.
While AIC attempts to balance fit to the training data against model complexity, there is nothing inherent to it that guarantees model selection will result in a non-overfit model (of course, it penalizes model complexity more than selecting solely on fit to the training data would, so in that sense overfitting ought to be somewhat better avoided). In fact (see point 3 above), the very act of doing model selection involves the potential for overfitting to the data on which you select the model, and model averaging (and various other approaches) has been proposed to avoid this issue (see e.g. Model Selection and Multimodel Inference by Burnham and Anderson).
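For point 3 in particular, a quick simulation makes the mechanism concrete. This is my own sketch (not from the question): fit polynomials of increasing degree to data whose true relationship is linear, compute the Gaussian least-squares AIC for each, and select the minimum; nothing forces that minimum onto degree 1.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50
x = rng.uniform(-1, 1, n)
y = 2 * x + rng.normal(0, 0.5, n)  # true relationship is linear

def aic_gaussian(rss, n, k):
    # AIC for least squares with Gaussian errors, up to an additive constant:
    # n * log(RSS / n) + 2k, where k counts fitted parameters (incl. variance)
    return n * np.log(rss / n) + 2 * k

aics = {}
for degree in range(1, 9):  # candidate models of growing complexity
    coef = np.polyfit(x, y, degree)
    rss = float(np.sum((y - np.polyval(coef, x)) ** 2))
    aics[degree] = aic_gaussian(rss, n, degree + 2)

best = min(aics, key=aics.get)
print(best)  # the selected degree; AIC gives no guarantee it equals 1
```

The more candidate degrees you compare, the more chances a flexible model has of winning by fitting noise, which is exactly the selection-induced overfitting described above.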
|
AIC can recommend an overfitting model?
|
AIC can most definitely select an overfit model, because you e.g.
only assess overfit models (so one of those gets selected), or
you offer up an overfit model vs. an inappropriate model (seems to be
|
AIC can recommend an overfitting model?
AIC can most definitely select an overfit model, because you e.g.
only assess overfit models (so one of those gets selected), or
you offer up an overfit model vs. an inappropriate model (seems to be your example)/a very overfit model/an underfit model, or
you compare more than one model via AIC (the more models the worse this gets) and by testing several you end up overfitting via the model selection.
While AIC attempts to balance fit to the training data against model complexity, there is nothing inherent to it that guarantees model selection will result in a non-overfit model (of course, it penalizes model complexity more than selecting solely on fit to the training data would, so in that sense overfitting ought to be somewhat better avoided). In fact (see point 3 above), the very act of doing model selection involves the potential for overfitting to the data on which you select the model, and model averaging (and various other approaches) has been proposed to avoid this issue (see e.g. Model Selection and Multimodel Inference by Burnham and Anderson).
|
AIC can recommend an overfitting model?
AIC can most definitely select an overfit model, because you e.g.
only assess overfit models (so one of those gets selected), or
you offer up an overfit model vs. an inappropriate model (seems to be
|
40,555
|
Can random forest detect squared terms?
|
Indeed it can. Here are some simulated data with a squared relationship between the predictor and the response, and the fit from a Random Forest:
R code:
nn <- 1e4
set.seed(1)
xx <- runif(nn)
yy <- xx^2+rnorm(nn,0,0.1)
plot(xx,yy,pch=19,cex=0.6,col="lightgray")
library(randomForest)
model <- randomForest(yy~xx)
xx_pred <- seq(0,1,by=.01)
lines(xx_pred,predict(model,newdata=data.frame(xx=xx_pred)),col="red",lwd=2)
As to how the RF does this: remember that it is just a collection of classification and regression trees. Each separate tree (based on a bootstrap sample of the data, and a subset of predictors) will use a different cutoff of the predictor and output a different value for the response for low vs. high predictor values. On average, the fitted response for high predictor values will deviate more from the average than for low predictor values, and thus model the nonlinear relationship.
There are also RF implementations that fit linear models on the predictor in the leaves. These can of course also model nonlinearities, by fitting different slopes for different values of the predictor.
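To see the mechanism in miniature, here is my own sketch (plain NumPy, not the randomForest code above) of a single depth-1 regression tree on the same kind of data: even one cutoff already reduces error relative to a constant fit, and the forest averages many such step functions into a smooth curve.

```python
import numpy as np

rng = np.random.default_rng(1)
xx = rng.uniform(0, 1, 1000)
yy = xx ** 2 + rng.normal(0, 0.1, 1000)

# Brute-force search for the single split minimizing the squared error,
# i.e. a depth-1 regression tree ("stump").
best = None
for cut in np.linspace(0.05, 0.95, 91):
    left, right = yy[xx <= cut], yy[xx > cut]
    sse = ((left - left.mean()) ** 2).sum() + ((right - right.mean()) ** 2).sum()
    if best is None or sse < best[0]:
        best = (sse, cut, left.mean(), right.mean())

sse_stump, cut, mean_lo, mean_hi = best
sse_const = ((yy - yy.mean()) ** 2).sum()  # baseline: predict the global mean
print(cut, mean_lo, mean_hi)  # low-x leaf predicts less than high-x leaf
```

Because $y = x^2$ rises with $x$, the high-$x$ leaf sits well above the low-$x$ leaf, and the stump's squared error is strictly below the constant baseline.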
|
Can random forest detect squared terms?
|
Indeed it can. Here are some simulated data with a squared relationship between the predictor and the response, and the fit from a Random Forest:
R code:
nn <- 1e4
set.seed(1)
xx <- runif(nn)
yy <- x
|
Can random forest detect squared terms?
Indeed it can. Here are some simulated data with a squared relationship between the predictor and the response, and the fit from a Random Forest:
R code:
nn <- 1e4
set.seed(1)
xx <- runif(nn)
yy <- xx^2+rnorm(nn,0,0.1)
plot(xx,yy,pch=19,cex=0.6,col="lightgray")
library(randomForest)
model <- randomForest(yy~xx)
xx_pred <- seq(0,1,by=.01)
lines(xx_pred,predict(model,newdata=data.frame(xx=xx_pred)),col="red",lwd=2)
As to how the RF does this: remember that it is just a collection of classification and regression trees. Each separate tree (based on a bootstrap sample of the data, and a subset of predictors) will use a different cutoff of the predictor and output a different value for the response for low vs. high predictor values. On average, the fitted response for high predictor values will deviate more from the average than for low predictor values, and thus model the nonlinear relationship.
There are also RF implementations that fit linear models on the predictor in the leaves. These can of course also model nonlinearities, by fitting different slopes for different values of the predictor.
|
Can random forest detect squared terms?
Indeed it can. Here are some simulated data with a squared relationship between the predictor and the response, and the fit from a Random Forest:
R code:
nn <- 1e4
set.seed(1)
xx <- runif(nn)
yy <- x
|
40,556
|
Specifying model in glmer() - interaction terms
|
First, note that A*B is just shorthand for A + B + A:B and it does not make sense to specify a model with only the interaction term, as in your last model. That is, when including an interaction, as a general rule you also need to include the main effects for each variable involved in the interaction. In other words you should either fit A + B if you don't want an interaction or A*B (or A + B + A:B) if you do want to include the interaction.
Second, note that, in the presence of an interaction, the meaning of the main effects changes. Without an interaction, each main effect is interpreted as the association of a 1 unit change (or the difference compared to the reference level, in the case of a categorical variable) with the outcome, leaving the other covariates unchanged. However, in the presence of an interaction, each main effect is interpreted as the association of a 1 unit change (or the difference compared to the reference level, in the case of a categorical variable) with the outcome, when the other variable that is involved in the interaction is zero (or at its reference level in the case of a categorical variable). This is why the estimates and their p values for the main effects are different after including them in an interaction: they are testing different things.
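That reinterpretation is easy to verify with a toy, noise-free linear model. This sketch is mine and uses ordinary least squares rather than glmer, but the algebra of the coefficients is the same:

```python
import numpy as np

# Saturated 2x2 design generated from known coefficients:
# y = 1 + 2*A + 3*B + 4*A:B (no noise)
A = np.array([0.0, 0.0, 1.0, 1.0])
B = np.array([0.0, 1.0, 0.0, 1.0])
y = 1 + 2 * A + 3 * B + 4 * A * B

X = np.column_stack([np.ones_like(A), A, B, A * B])  # intercept, A, B, A:B
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
print(coef)  # approximately [1, 2, 3, 4]

# The "main effect" of A is its effect when B = 0: coef[1] = 2.
# At B = 1, the effect of A is coef[1] + coef[3] = 6.
```

The recovered coefficient on A (2) is the difference in y between A = 1 and A = 0 only when B = 0; at B = 1 that difference is 6, which is why the main-effect estimate changes once the interaction enters the model.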
From the output of the first three models, we see from mod3 that the interaction term is not significant. This usually means that you can safely drop the interaction. I say "usually" because p values are also related to sample size, so if you have strong theoretical reasons for including the interaction, and you only have a small sample, then you should retain it. A power analysis before conducting the experiment/study to determine adequate sample size is the best way to proceed, so if you are going to follow up with a further study then I would strongly suggest that. Assuming that statistical power isn't the issue, then as mentioned above you can drop the interaction and proceed with the model that contains both main effects. Presumably you have good reason for wanting to include them in the first place. Model selection based on p values is a highly dubious procedure, and it is far better to use your knowledge of the subject matter to select the best model. So a lot will depend on your research question. For example if you are mainly interested in understanding how Time is associated with the probability of response, then it is possible that sex is a confounder so you would definitely want to include it.
Lastly, note that time often has a non-linear association with outcomes so you might want to include non-linear terms (such as a quadratic term) for it, or use splines. You might also want to allow the association between time and the probability of response to be different within ID and Location, by including random slopes for it.
|
Specifying model in glmer() - interaction terms
|
First, note that A*B is just shorthand for A + B + A:B and it does not make sense to specify a model with only the interaction term, as in your last model. That is, when including an interaction, as
|
Specifying model in glmer() - interaction terms
First, note that A*B is just shorthand for A + B + A:B and it does not make sense to specify a model with only the interaction term, as in your last model. That is, when including an interaction, as a general rule you also need to include the main effects for each variable involved in the interaction. In other words you should either fit A + B if you don't want an interaction or A*B (or A + B + A:B) if you do want to include the interaction.
Second, note that, in the presence of an interaction, the meaning of the main effects changes. Without an interaction, each main effect is interpreted as the association of a 1 unit change (or the difference compared to the reference level, in the case of a categorical variable) with the outcome, leaving the other covariates unchanged. However, in the presence of an interaction, each main effect is interpreted as the association of a 1 unit change (or the difference compared to the reference level, in the case of a categorical variable) with the outcome, when the other variable that is involved in the interaction is zero (or at its reference level in the case of a categorical variable). This is why the estimates and their p values for the main effects are different after including them in an interaction: they are testing different things.
From the output of the first three models, we see from mod3 that the interaction term is not significant. This usually means that you can safely drop the interaction. I say "usually" because p values are also related to sample size, so if you have strong theoretical reasons for including the interaction, and you only have a small sample, then you should retain it. A power analysis before conducting the experiment/study to determine adequate sample size is the best way to proceed, so if you are going to follow up with a further study then I would strongly suggest that. Assuming that statistical power isn't the issue, then as mentioned above you can drop the interaction and proceed with the model that contains both main effects. Presumably you have good reason for wanting to include them in the first place. Model selection based on p values is a highly dubious procedure, and it is far better to use your knowledge of the subject matter to select the best model. So a lot will depend on your research question. For example if you are mainly interested in understanding how Time is associated with the probability of response, then it is possible that sex is a confounder so you would definitely want to include it.
Lastly, note that time often has a non-linear association with outcomes so you might want to include non-linear terms (such as a quadratic term) for it, or use splines. You might also want to allow the association between time and the probability of response to be different within ID and Location, by including random slopes for it.
|
Specifying model in glmer() - interaction terms
First, note that A*B is just shorthand for A + B + A:B and it does not make sense to specify a model with only the interaction term, as in your last model. That is, when including an interaction, as
|
40,557
|
Plausibility, Possibility, and Probability
|
Possibility and probability can be examined together
This answer is based on material in the paper O'Neill (2014) which gives a detailed exposition of the relationship between probability and possibility for the assistance of statistics students and practitioners. Some of the material in this answer is copied from that source without further citation/attribution. I recommend you have a look at that paper to get a more detailed exposition of the material I discuss in this answer. You can also find a related answer here.
I am not aware of any formal definition of "plausibility" beyond observing that it is sometimes used as a quantification of probability. However, I can tell you the relationship between the other two concepts. Possibility is a slightly different concept from probability: it demarcates all the outcomes that can occur within a wider space. There is an alternative algebra for "possibility measures", with a different kind of additivity property that reflects the fact that the possibility measure is "compositional". A possibility measure can be applied over the set of all events on an arbitrary set of outcomes, but if the class of events has the required structure for a probability measure, then we can also have a probability measure over the same class of events, which allows us to examine the interaction of the two measures. The two measures interact on the basis of a simple and intuitive rule: every impossible event has zero probability. (Note that the converse is not always true.)
There is a large literature looking at mathematical representations of possibility (known as "possibility theory"), which is closely related to fuzzy set theory. Overviews of this literature can be found in Yager (ed) (1982), Kacprzyk and Orlovski (eds) (1987), Dubois and Prade (1988), Terano, Asai and Sugeno (1992) and Zadeh and Kacprzyk (1992). A useful analysis of the relationship between possibility and probability representations can be found in Dubois and Prade (1993).
Formal presentation: Suppose we let $\Omega_*$ denote some set of outcomes and let $\mathscr{G}_*$ be a class of events with sufficient structure to allow a probability measure (i.e., it is a sigma-field). Suppose we have a possibility measure $\mathbb{pos}$ and a probability measure $\mathbb{P}$ on this class of events, giving rise to a possibility/probability space $(\Omega_*, \mathscr{G}_*, \mathbb{pos}, \mathbb{P})$. This space allows us to examine the interaction between possibility and probability. The starting point is our basic axiom (that I will call the "axiom of correspondence"), which says that every impossible event must have zero probability:
Axiom of correspondence: For all events $\mathcal{E} \in \mathscr{G}_*$ we have:$$\mathbb{pos}(\mathcal{E}) = 0
\quad \quad \implies \quad \quad \mathbb{P}(\mathcal{E}) = 0.$$
It is useful to narrow this down to look at only the outcomes that are possible. The set of possible outcomes (which I will call the possibility space) is given by $\Omega \equiv \{ \omega \in \Omega_* | \mathbb{pos}( \{ \omega \} ) > 0 \}$, and this gives rise to the subclass of events $\mathscr{G}$ on $\Omega$. All outcomes outside the possibility space (i.e., in $\Omega_* - \Omega$) are impossible. It is easy to use the properties of the possibility measure, and the above axiom, to show the following:
Events outside the possibility space: For all events $\mathcal{E} \subseteq \Omega_* - \Omega$ we have $\mathbb{pos}(\mathcal{E}) = \mathbb{P}(\mathcal{E}) = 0$; all these events are impossible and have probability zero.
Once the possibility space is defined, it is possible to rewrite the axiom of correspondence in the more compact form shown below. This version states that the possibility space is an almost sure event.
Axiom of correspondence (alternative form): We have $\mathbb{P}(\Omega) = 1$.
If you have done a foundational course in probability theory, you will recognise this as the norming axiom of probability, which applies to the "sample space". Thus, this latter version of the axiom of correspondence is the justification for why we would ordinarily just start on the space $\Omega$ in applications of probability theory. Ordinarily, we would take this set of possible outcomes to be our "sample space" and we would form the probability space $(\Omega, \mathscr{G}, \mathbb{P})$ using that space. In this formulation, every outcome in the sample space is considered to be a possible outcome (since that space is just the possibility space). Any outcomes outside this are impossible, and so they have zero probability, and can safely be ignored.
Since probability theory forms the basis for statistical analysis, this essentially means that we can ignore a set of outcomes with probability zero, and concentrate attention on some sample space that occurs almost surely. Removal of impossible events is a natural part of this reduction.
As you can see, there is a clear formal interaction between the concepts of "possibility" and "probability". This addresses the common confusion that students have when they deal with continuous random variables, where we have outcomes that have zero probability density (and therefore zero probability), but which are nonetheless possible outcomes. If you read some of the literature on possibility theory you will see that it allows the possibility measure to be "fuzzy", in the sense that it can take on values other than zero and one. However, it is instructive to consider the case where this measure is restricted to values of zero or one, to get a basic intuition for the interaction of possibility and probability.
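For a finite outcome set, the axiom of correspondence can be checked mechanically. The following is my own sketch of the coin example, using a {0, 1}-valued possibility measure (maxitive over outcomes) alongside the probability measure (additive over outcomes):

```python
from itertools import chain, combinations

p = 0.25
prob = {"Heads": p, "Tails": 1 - p, "Side": 0.0}  # probability of each outcome
poss = {"Heads": 1, "Tails": 1, "Side": 0}        # possibility of each outcome

def P(event):    # probability measure: additive over outcomes
    return sum(prob[w] for w in event)

def pos(event):  # possibility measure: maxitive, 0 for the empty event
    return max((poss[w] for w in event), default=0)

outcomes = list(prob)
events = chain.from_iterable(combinations(outcomes, r) for r in range(4))
for E in events:
    if pos(E) == 0:          # axiom of correspondence:
        assert P(E) == 0     # every impossible event has probability zero

print(P(("Heads", "Tails")))  # 1.0: the possibility space is almost sure
```

The final line is the alternative form of the axiom: the possibility space {Heads, Tails} has probability one, so the impossible outcome Side can be dropped from the sample space without changing anything.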
|
Plausibility, Possibility, and Probability
|
Possibility and probability can be examined together
This answer is based on material in the paper O'Neill (2014) which gives a detailed exposition of the relationship between probability and possibil
|
Plausibility, Possibility, and Probability
Possibility and probability can be examined together
This answer is based on material in the paper O'Neill (2014) which gives a detailed exposition of the relationship between probability and possibility for the assistance of statistics students and practitioners. Some of the material in this answer is copied from that source without further citation/attribution. I recommend you have a look at that paper to get a more detailed exposition of the material I discuss in this answer. You can also find a related answer here.
I am not aware of any formal definition of "plausibility" beyond observing that it is sometimes used as a quantification of probability. However, I can tell you the relationship between the other two concepts. Possibility is a slightly different concept than probability, which demarcates all the outcomes that can occur within a wider space. There is an alternative algebra for "possibility measures", with a different kind of additivity property that reflects the fact that the possibility measure is "compositional". A possibility measure can be applied over the set of all events on an arbitrary set of outcomes, but if the class of events has the required structure for a probability measure, then we can also have a probability measure over the same class of events, which allows us to examine the interaction of the two measures. The two measures interact on the basis of a simple and intuitive rule: every impossible event has zero probability. (Note that the converse is not always true.)
There is a large literature looking at mathematical representations of possibility (known as "possibility theory"), which is closely related to fuzzy set theory. Overviews of this literature can be found in Yager (ed) (1982), Kacprzyk and Orlovski (eds) (1987), Dubois and Prade (1988), Terano, Asai and Sugeno (1992) and Zadeh and Kacprzyk (1992). A useful analysis of the relationship between possibility and probability representations can be found in Dubois and Prade (1993).
Formal presentation: Suppose we let $\Omega_*$ denote some set of outcomes and let $\mathscr{G}_*$ be a class of events with sufficient structure to allow a probability measure (i.e., it is a sigma-field). Suppose we have a possibility measure $\mathbb{pos}$ and a probability measure $\mathbb{P}$ on this class of events, giving rise to a possibility/probability space $(\Omega_*, \mathscr{G}_*, \mathbb{pos}, \mathbb{P})$. This space allows us to examine the interaction between possibility and probability. The starting point is our basic axiom (that I will call the "axiom of correspondence"), which says that every impossible event must have zero probability:
Axiom of correspondence: For all events $\mathcal{E} \in \mathscr{G}_*$ we have:$$\mathbb{pos}(\mathcal{E}) = 0
\quad \quad \implies \quad \quad \mathbb{P}(\mathcal{E}) = 0.$$
It is useful to narrow this down to look at only the outcomes that are possible. The set of possible outcomes (which I will call the possibility space) is given by $\Omega \equiv \{ \omega \in \Omega_* | \mathbb{pos}( \{ \omega \} ) > 0 \}$, and this gives rise to the subclass of events $\mathscr{G}$ on $\Omega$. All outcomes outside the possiblity space (i.e., in $\Omega_* - \Omega$) are impossible. It is easy to use the properties of the possibility measure, and the above axiom, to show the following:
Events outside the possibility space: For all events $\mathcal{E} \subseteq \Omega_* - \Omega$ we have $\mathbb{pos}(\mathcal{E}) = \mathbb{P}(\mathcal{E}) = 0$; all these events are impossible and have probability zero.
Once the possibility space is defined, it is possible to rewrite the axiom of correspondence in the more compact form show below. This version states that the possibility space is an almost sure event.
Axiom of correspondence (alternative form): We have $\mathbb{P}(\Omega) = 1$.
If you have done a foundational course in probability theory, you will recognise this as the norming axiom of probability, which applies to the "sample space". Thus, this latter version of the axiom of correspondence is the justification for why we would ordinarily just start on the space $\Omega$ in applications of probability theory. Ordinarily, we would take this set of possible outcomes to be our "sample space" and we would form the probability space $(\Omega, \mathscr{G}, \mathbb{P})$ using that space. In this formulation, every outcome in the sample space is considered to be a possible outcome (since that space is just the possibility space). Any outcomes outside this are impossible, and so they have zero probability, and can safely be ignored.
Since probability theory forms the basis for statistical analysis, this essentially means that we can ignore a set of outcomes with probability zero, and concentrate attention on some sample space that occurs almost surely. Removal of impossible events is a natural part of this reduction.
As you can see, there is a clear formal interaction between the concepts of "possibility" and "probability". This addresses the common confusion that students have when they deal with continuous random variables, where we have outcomes that have zero probability density (and therefore zero probability), but which are nonetheless possible outcomes. If you read some of the literature on possibility theory you will see that it allows the possibility measure to be "fuzzy", in the sense that it can take on values other than zero and one. However, it is instructive to consider the case where this measure is restricted to values of zero or one, to get a basic intuition for the interaction of possibility and probability.
|
Plausibility, Possibility, and Probability
Possibility and probability can be examined together
This answer is based on material in the paper O'Neill (2014) which gives a detailed exposition of the relationship between probability and possibil
|
40,558
|
Dense vs Sequential Layers in Keras
|
In Keras, "dense" usually refers to a single layer, whereas "sequential" usually refers to an entire model, not just one layer. So I'm not sure the comparison between "Dense vs. Sequential" makes sense.
Sequential refers to the way you build models in Keras using the sequential api (from keras.models import Sequential), where you build the neural network one layer at a time, in sequence: Input layer, hidden layer 1, hidden layer 2, etc...output layer. This is straightforward and intuitive, but puts limitations on the types of networks you can build.
Contrast this to the functional api (from keras.models import Model), where you can build acyclic graphs, shared layers, etc....but where you have to specify a lot of the parameters yourself (e.g. how layers should be connected, which one is the input and which one is the output, etc...)
"Dense" refers to the types of neurons and connections used in that particular layer, and specifically to a standard fully connected layer, as opposed to an LSTM layer, a CNN layer (different types of neurons compared to dense), or a layer with Dropout (same neurons, but different connectivity compared to Dense).
Different types of layers can coexist in the same network, e.g. :
from keras.models import Sequential
from keras.layers import LSTM, Dense  # layer types must be imported as well
model = Sequential()
model.add(LSTM(50, activation='relu', return_sequences=True, input_shape=(n_steps,n_features)))
model.add(LSTM(50, activation='relu'))
model.add(Dense(1))
model.compile(optimizer='adam', loss='mse')
|
Dense vs Sequential Layers in Keras
|
In Keras, "dense" usually refers to a single layer, whereas "sequential" usually refers to an entire model, not just one layer. So I'm not sure the comparison between "Dense vs. Sequential" makes sens
|
Dense vs Sequential Layers in Keras
In Keras, "dense" usually refers to a single layer, whereas "sequential" usually refers to an entire model, not just one layer. So I'm not sure the comparison between "Dense vs. Sequential" makes sense.
Sequential refers to the way you build models in Keras using the sequential api (from keras.models import Sequential), where you build the neural network one layer at a time, in sequence: Input layer, hidden layer 1, hidden layer 2, etc...output layer. This is straightforward and intuitive, but puts limitations on the types of networks you can build.
Contrast this to the functional api (from keras.models import Model), where you can build acyclic graphs, shared layers, etc....but where you have to specify a lot of the parameters yourself (e.g. how layers should be connected, which one is the input and which one is the output, etc...)
"Dense" refers to the types of neurons and connections used in that particular layer, and specifically to a standard fully connected layer, as opposed to an LSTM layer, a CNN layer (different types of neurons compared to dense), or a layer with Dropout (same neurons, but different connectivity compared to Dense).
Different types of layers can coexist in the same network, e.g. :
from keras.models import Sequential
from keras.layers import LSTM, Dense  # layer types must be imported as well
model = Sequential()
model.add(LSTM(50, activation='relu', return_sequences=True, input_shape=(n_steps,n_features)))
model.add(LSTM(50, activation='relu'))
model.add(Dense(1))
model.compile(optimizer='adam', loss='mse')
|
Dense vs Sequential Layers in Keras
In Keras, "dense" usually refers to a single layer, whereas "sequential" usually refers to an entire model, not just one layer. So I'm not sure the comparison between "Dense vs. Sequential" makes sens
|
40,559
|
Dense vs Sequential Layers in Keras
|
Sequential is not a layer, it is a model. In sequential models, you stack up multiple layers (of the same or different types), where each layer's output feeds into the next. This is the default structure with neural nets. Dense is a layer type (fully connected layer). There are others such as Convolutional, Pooling, LSTM etc.
|
Dense vs Sequential Layers in Keras
|
Sequential is not a layer, it is a model. In sequential models, you stack up multiple layers (of the same or different types), where each layer's output feeds into the next. This is the default structure with neural net
|
Dense vs Sequential Layers in Keras
Sequential is not a layer, it is a model. In sequential models, you stack up multiple layers (of the same or different types), where each layer's output feeds into the next. This is the default structure with neural nets. Dense is a layer type (fully connected layer). There are others such as Convolutional, Pooling, LSTM etc.
|
Dense vs Sequential Layers in Keras
Sequential is not a layer, it is a model. In sequential models, you stack up multiple layers (of the same or different types), where each layer's output feeds into the next. This is the default structure with neural net
|
40,560
|
How can one implement PCA using gradient descent?
|
We can pose PCA as a variance maximization problem. These are some of the hints:
The objective is to find the directions in which the variance is maximum; for centered data, the relevant second-moment (covariance) matrix is $\Bbb E(\vec X \vec X^T)$.
Let $\vec w$ denote the unit vector direction along which the variance is maximum. The variance along this direction is given by:
\begin{aligned} \sigma_{\vec{w}}^{2} &=\frac{1}{n} \sum_{i}(\overrightarrow{x_{i}} \cdot \vec{w})^{2} \\ &=\frac{1}{n}(\mathbf{x} \mathbf{w})^{T}(\mathbf{x} \mathbf{w}) \\ &=\frac{1}{n} \mathbf{w}^{T} \mathbf{x}^{T} \mathbf{x} \mathbf{w} \\ &=\mathbf{w}^{T} \frac{\mathbf{x}^{T} \mathbf{x}}{n} \mathbf{w} \\ &=\mathbf{w}^{T} \mathbf{v} \mathbf{w} \end{aligned}
We can take gradients with respect to $\vec w$ and set them to zero to find the optimum values.
$$
\begin{aligned} \mathscr{L}(\mathbf{w}, \lambda) & \equiv \sigma_{\mathrm{w}}^{2}-\lambda\left(\mathbf{w}^{T} \mathbf{w}-1\right) \\ \frac{\partial L}{\partial \lambda} &=\mathbf{w}^{T} \mathbf{w}-1 \\ \frac{\partial L}{\partial \mathbf{w}} &=2 \mathbf{v} \mathbf{w}-2 \lambda \mathbf{w} \end{aligned}
$$
Here $\mathscr{L}$ is the modified objective with Lagrange multipliers (they are required to ensure $\vec w$ is a unit vector)
Setting the derivatives to zero, gives you an eigenvalue problem!
If one prefers gradient descent: since we have already obtained the gradients, we can also optimize $\mathscr{L}$ iteratively (gradient ascent on the variance, or equivalently descent on $-\mathscr{L}$) instead of solving the eigenvalue problem directly.
Update:
The above problem is concave:
Hessian is
$$ \begin{bmatrix}
0 & -2 \bf{w}^T \\
-2 \bf{w} & 2 \bf{v} - 2 \lambda \Bbb I
\end{bmatrix} $$
The determinant of this Hessian can be computed as $$
{\rm det}\left|\begin{matrix} A & B \\ C & D \end{matrix}\right| =
{\rm det}|D|\,{\rm det} \left|A - BD^{-1} C\right| $$
The determinant is negative because the expression $$- {\rm det}(2 \mathbf{v} - 2 \lambda \Bbb I)\, {\rm det}\big(2 \mathbf{w}^T (2 \mathbf{v} - 2 \lambda \Bbb I)^{-1}\, 2 \mathbf{w}\big)$$ is always negative. Therefore, the iteration will converge to a maximum.
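A minimal NumPy sketch of the gradient-based approach (projected gradient ascent on $\mathbf{w}^{T} \mathbf{v} \mathbf{w}$: instead of carrying $\lambda$ explicitly, $\mathbf{w}$ is renormalized onto the unit sphere after every step; the synthetic data and learning rate are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic centered data with one dominant direction of variance
X = rng.normal(size=(500, 2)) @ np.array([[3.0, 0.0], [0.0, 0.5]])
X -= X.mean(axis=0)
V = X.T @ X / len(X)                # the matrix v = x^T x / n

w = rng.normal(size=2)
w /= np.linalg.norm(w)
lr = 0.1
for _ in range(200):
    w += lr * 2 * V @ w             # gradient of w^T V w is 2 V w
    w /= np.linalg.norm(w)          # project back onto the unit sphere

# The result should align with the top eigenvector of V
top = np.linalg.eigh(V)[1][:, -1]
print(abs(w @ top))                 # close to 1
```

Renormalizing after each step plays the role of the Lagrange-multiplier constraint $\mathbf{w}^{T}\mathbf{w} = 1$.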
|
40,561
|
Reference request - Computer Vision Book
|
To the best of my knowledge, there is not yet (as of 2019) a comprehensive academic computer vision textbook that incorporates deep learning.
It's useful to separate the discussion and formulation of a problem from the algorithms deployed to solve it. @shimao makes a good point that earlier methods are often recycled with new deep learning components. Goodfellow et al.'s and Bishop's books are good references on deep learning and machine learning, respectively, but they don't say much about computer vision problems. By that I mean the tasks that occupy a lot of computer vision research, like noise filtering, 3D reconstruction, image registration, computational photography, structure from motion, etc.
To better understand CV, Szeliski's book is still quite good, although it is a high-level survey. Topics that CNNs dominate, such as segmentation and recognition, take up only two chapters in that book, so there's a lot of other interesting material.
I think the following books are also useful and worth having a look at although they cover classical methods:
Prince, Simon J.D. Computer Vision: Models, Learning, and Inference. Cambridge University Press, 2012.
Hartley, Richard, and Andrew Zisserman. Multiple View Geometry in Computer Vision. Cambridge University Press, 2003.
Gonzalez, Rafael C., and Richard E. Woods. Digital Image Processing. Pearson Higher Ed, 2011.
If you are really interested in CV, I think it can also be interesting to learn about human visual perception and optics as well as related fields like robotics, computer graphics and medical/scientific imaging.
|
40,562
|
Reference request - Computer Vision Book
|
To add to @MachineEpsilon's answer: deep learning has become the de facto tool for tasks like segmentation and object detection, and is starting to take over in domains like 3D reconstruction. Nevertheless, it is still an ongoing process.
You still need a good knowledge of projective geometry (Hartley and Zisserman's book) to solve tasks like metrology (performing accurate measurements on images). There are situations where you want to extract specific features (like edges or contours). Also, matching and tracking planar objects with feature-based methods is still a very good option, and easier to set up than fine-tuning a neural network.
These techniques are also at the core of many SLAM approaches.
My point is: there are still many relevant use cases where either deep learning does not provide a (satisfactory) solution, or where traditional approaches are easier to use, if only because there are well-proven software libraries like OpenCV.
|
40,563
|
Name and interpretation of "$h(x)$" in exponential family
|
The $h(x)$ function in the exponential family is known as the "underlying measure" (also called the base measure). It serves to ensure $x$ is in the right space. For many distributions this factor is trivial (i.e. it is a constant such as $1$ or $1/\sqrt{2\pi}$). It does play a strong role in defining many distributions, however. Since the role is distribution-specific beyond the definition above, I will link to the part of the Wikipedia page on the exponential family with a few helpful examples (in table form) of the role of $h(x)$ in common distributions.
Link:
https://www.wikiwand.com/en/Exponential_family#/Table_of_distributions
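For example, the Poisson distribution has a nontrivial base measure; written in exponential-family form,
$$
p(x \mid \lambda) = \frac{\lambda^{x} e^{-\lambda}}{x!} = \underbrace{\frac{1}{x!}}_{h(x)} \exp\left( x \log \lambda - \lambda \right),
$$
so $h(x) = 1/x!$, with natural parameter $\eta = \log \lambda$ and sufficient statistic $T(x) = x$.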
|
40,564
|
How to include an interaction with a quadratic term? [closed]
|
It's the same formula (meaning that the models are equivalent), just the R notation is different.
Here is an example with random data:
x1 <- rnorm(100)
x2 <- rnorm(100)
y <- x1 + x2 + x2**2 + x1*x2 + rnorm(100)
fit <- lm(y ~ x1 + x2 + I(x2^2) + x1:x2 + x1:I(x2^2))
fit <- lm(y ~ x1 + x2 + I(x2^2) + x1:(x2 + I(x2^2)))
fit <- lm(y ~ x1 + x2 + I(x2*x2) + x1:(x2 + I(x2*x2)))
All three of these produce the same results, where x1 is interacted with both x2 and the squared version of x2. (The last fit can be written even more compactly as lm(y ~ x1 * (x2 + I(x2^2))), since * in an R formula expands to main effects plus interactions.)
Residuals:
Min 1Q Median 3Q Max
-2.12678 -0.64983 0.03115 0.59760 2.26080
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) -0.11838 0.12757 -0.928 0.356
x1 0.95627 0.13901 6.879 6.61e-10 ***
x2 1.04394 0.09099 11.473 < 2e-16 ***
I(x2 * x2) 0.94417 0.06015 15.698 < 2e-16 ***
x1:x2 1.05098 0.12875 8.163 1.45e-12 ***
x1:I(x2 * x2) 0.05926 0.09656 0.614 0.541
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
Residual standard error: 1.003 on 94 degrees of freedom
Multiple R-squared: 0.8412, Adjusted R-squared: 0.8328
F-statistic: 99.59 on 5 and 94 DF, p-value: < 2.2e-16
|
40,565
|
Difference between Granger causality and Instantaneous causality?
|
I was looking for the answer to this same question, and I found it in the book Introduction to Modern Time Series Analysis (second edition) by Gebhard Kirchgassner, Jurgen Wolters and Uwe Hassler, on page 97.
Granger causality: x Granger-causes y if a model that uses current and past values of x together with current and past values of y to predict future values of y has a smaller forecast error than a model that only uses current and past values of y. In other words, Granger causality answers the following question: does the past of variable x help improve the prediction of future values of y?
Instantaneous causality: x instantaneously Granger-causes y if a model that uses current, past and future values of x together with current and past values of y to predict y has a smaller forecast error than a model that only uses current and past values of x and of y. In other words, instantaneous Granger causality answers the question: does knowing the future of x help me better predict the future of y? If I know what x is going to do, does that help me know what y is going to do?
I know this is an old question, but I thought I would answer it in case someone else is struggling with this as I was.
The book goes deeply into the math of these two metrics, so please take a look at it if you want a more formal answer.
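To make the Granger definition concrete, here is a self-contained NumPy sketch (the data-generating process and lag length of 1 are illustrative): x Granger-causes y by construction, so adding the past of x shrinks the forecast error of y.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 2000
x = rng.normal(size=n)
y = np.zeros(n)
for t in range(1, n):
    # y depends on its own past AND the past of x, so x Granger-causes y
    y[t] = 0.5 * y[t - 1] + 0.8 * x[t - 1] + 0.1 * rng.normal()

b = y[1:]
# Restricted model: predict y[t] from y[t-1] only
A_r = np.column_stack([np.ones(n - 1), y[:-1]])
# Unrestricted model: also use x[t-1]
A_u = np.column_stack([np.ones(n - 1), y[:-1], x[:-1]])

def mse(A, b):
    coef = np.linalg.lstsq(A, b, rcond=None)[0]
    return np.mean((b - A @ coef) ** 2)

print(mse(A_r, b) > mse(A_u, b))    # True: the past of x improves the forecast
```

A formal test (e.g. an F-test on the restriction) would compare these errors while accounting for the extra parameter; the sketch only shows the forecast-error comparison at the heart of the definition.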
|
40,566
|
What makes a Random Forest random besides bootstrapping and random sampling of features?
|
If we set aside the discrepancies arising from roundoff error, the remaining differences originate in the treatment of ties. Class sklearn.ensemble.RandomForestClassifier is composed of many instances of sklearn.tree.DecisionTreeClassifier (you can verify this by reading the source). If we read the documentation for sklearn.tree.DecisionTreeClassifier, we find that there is some non-determinism in how the trees are built, even when using all features, because of how the fit method handles ties:
The features are always randomly permuted at each split. Therefore, the best found split may vary, even with the same training data and max_features=n_features, if the improvement of the criterion is identical for several splits enumerated during the search of the best split. To obtain a deterministic behaviour during fitting, random_state has to be fixed.
In most cases, this is roundoff error. Whenever comparing floats for equality, you want to use something like np.isclose, not ==. Using == with floats is the road to madness.
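A standalone illustration of why exact equality misleads for floats (independent of the forest example):

```python
import numpy as np

a = 0.1 + 0.2
print(a == 0.3)             # False: binary floating point can't represent these exactly
print(np.isclose(a, 0.3))   # True: comparison within a small relative/absolute tolerance
```
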
import numpy as np
np.isclose(pred_1, pred_2)
array([ True, True, True, True, True, True, True, True, True,
True, True, True, True, True, True, True, True, True,
True, True, True, True, True, True, True, True, True,
True, True, True, True, True, True, True, False, True,
True, True, True, True, True, True, True, True, True,
True, True, True, True, True, True, True, True, True,
True, True, True, True, True, True, True, True, True,
True, True, True, True, True, True, True, True, True,
True, True, True, True, True, True, True, True, True,
True, True, True, True, True, True, True, True, True,
True, True, True, True, True, True, True, True, True,
True])
For some reason, only the entry at index 34 is mismatched in a way that is not accounted for by numerical error.
mistake = np.where(np.logical_not(np.isclose(pred_1, pred_2)))
mistake
# array([34])
pred_1[mistake]
# array([33.54285714])
pred_2[mistake]
# array([31.82857143])
If I fix the seed used for the models, this discrepancy disappears. It may re-appear if you choose a different pair of seeds. I don't know.
model3 = RandomForestRegressor(bootstrap=False, max_features=1.0, max_depth=3, random_state=13)
model4 = RandomForestRegressor(bootstrap=False, max_features=1.0, max_depth=3, random_state=14)
pred_3 = model3.fit(X_train, y_train).predict(X_test)
pred_4 = model4.fit(X_train, y_train).predict(X_test)
np.isclose(pred_3, pred_4).all()
# True
See also: How does a Decision Tree model choose thresholds in scikit-learn?
|
40,567
|
Var(XY), if X and Y are independent random variables [duplicate]
|
You can follow Henry's comments to arrive at the answer. However, another way is to use the fact that if $X$ and $Y$ are independent, then the conditional distribution of $Y$ given $X$ is just the distribution of $Y$ (and likewise for $X$ given $Y$).
By iterated expectations and the law of total variance,
\begin{align*}
\text{Var}(XY) & = \text{Var}[\,\text{E}(XY \mid X)\,] + \text{E}[\,\text{Var}(XY \mid X)\,]\\
& = \text{Var}[\,X\, \text{E}(Y \mid X)\,] + \text{E}[\,X^2\, \text{Var}(Y \mid X)\,]\\
& = \text{Var}[\,X\, \text{E}(Y)\,] + \text{E}[\,X^2\, \text{Var}(Y)\,]\\
& = \text{E}(Y)^2\, \text{Var}(X) + \text{Var}(Y)\, \text{E}(X^2)\,.
\end{align*}
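Substituting $\text{E}(X^2) = \text{Var}(X) + \text{E}(X)^2$ yields the familiar symmetric form:
$$
\text{Var}(XY) = \text{Var}(X)\,\text{Var}(Y) + \text{E}(Y)^2\,\text{Var}(X) + \text{E}(X)^2\,\text{Var}(Y)\,.
$$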
|
40,568
|
Understanding semi supervised technique called mean teachers
|
I'll answer it in case someone else ever stumbles upon the need to understand the same issues. I mailed Antti Tarvainen, the author of the paper, and this is what he responded:
Hi,
Thanks for the questions. I perhaps tried to pack too much into that figure so it may be hard to interpret our meaning. I will try to clarify.
(a) The model (Ma) is learning to classify datapoints that are somewhere between the DL1 and DL2 as negative class which is something that we do not want
Pretty much so. By chance, the model may have actually learned exactly the true function we are trying to model. (By true function I mean the actual real-world phenomenon we are trying to model. It's often called target function, but that would be confusing in this context.) But it is unlikely, because there is an infinite number of alternative functions that fit the data. Therefore, we want to regularize, i.e. make the model more likely to learn likely true functions. The following four subfigures attempt to explain how noise- and consistency-based regularization schemes such as mean teacher perform useful regularization.
(b) We augmented the dataset by creating new points (small blue dots) by adding noise to DL1 and DL2. We assigned class 1 to these new datapoints, and now when we train the model (Mb) it learns to be somewhat invariant around DL1 and DL2 in order to predict class 1 for the small blue dots
Yes.
(c) An unlabeled example DU1 has entered the picture. Model (Mc1) is predicting it as class 1 but near the boundary between class 1 and class 2. Mc1 = The thin, pointed grey curve. Now, we augment the dataset by adding noise to unlabeled DU1 and create unlabeled datapoints (small black dots) and train the model (Mc2) with an additional L2 (or anything measuring consistency between two outputs) loss between the predictions of noisy unlabeled datapoints and DU1. Now, it learns to have a smooth boundary at the top of the curve rather than a pointed curve that it was learning earlier. Mc2 = thick grey curve. The DU1 is still predicted near to the prediction boundary. Why is this model better than the model in (b)?
Yes, that's correct. (Sorry for the very vague "gray curve" in the caption, and congratulations for deciphering it correctly. The thin curve is the teacher and the thick the student.)
I really should have drawn two figures like (c) (and (b) too): one where the model happens to be "lucky" and one where it happens to be "unlucky". As it is, I only drew an unlucky (c), which makes this confusing. Being unlucky motivates (d) and (e), but then (c) is indeed worse than (b).
The reason (c) is very unlucky is because it happens to peak right where the unlabeled example is. If it peaked at some other place, the noise around the unlabeled data point would smooth that peak, and bring it closer to the prediction of the unlabeled data point, and also the true value.
So sometimes a (c) is better than a (b), sometimes it is worse. We don't want to depend on luck, which motivates (d) and (e).
(d) Did not get it. Please explain
In (c) we were unlucky because we happened to pick the worst possible target. We don't want to be unlucky, so we sample the target prediction many times. We do this by adding noise to the teacher inputs too. Then the model learns to smooth the predictions around unlabeled data points towards the expected prediction in that neighborhood. The expected prediction is less noisy and probably a better target than any single prediction.
(e) Did not get it. Please explain
In (d) we reduced target noise by averaging over the neighborhood of each unlabeled data point (so in the space of input dimensions). But there's also noise in the model parameters. We can reduce our dependence on luck by averaging over model parameters. The way mean teacher does this is by averaging the model parameters over training steps.
(Arguably, dropout also adds noise to the model parameters, and thus is another way of improving expected model estimates. But in the paper we consider it a form of input noise, rather than parameter noise.)
I hope this clarifies it.
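The parameter averaging described in (e) is an exponential moving average (EMA) of the student's weights over training steps. A minimal sketch (the decay value 0.99 and the toy "weights" are illustrative, not taken from the paper):

```python
import numpy as np

def ema_update(teacher, student, alpha=0.99):
    # Mean-teacher update: teacher weights are an exponential moving
    # average of the student weights over training steps
    return {k: alpha * teacher[k] + (1 - alpha) * student[k] for k in teacher}

teacher = {"w": np.zeros(3)}
for step in range(500):
    student = {"w": np.ones(3)}     # stand-in for the student's current weights
    teacher = ema_update(teacher, student)

print(np.allclose(teacher["w"], 1.0, atol=0.01))  # True: the teacher tracks the student
```

Because each step averages in only a small fraction of the student, the teacher smooths out step-to-step noise in the student's parameters, which is exactly the "averaging over model parameters" the author describes.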
|
Understanding semi supervised technique called mean teachers
|
I'll answer it for in case someone ever stumbles upon the need to understand the same issues. I mailed Antti Tarvainen, the author of the paper and this is what he responded:
Hi,
Thanks for the questi
|
Understanding semi supervised technique called mean teachers
I'll answer it for in case someone ever stumbles upon the need to understand the same issues. I mailed Antti Tarvainen, the author of the paper and this is what he responded:
Hi,
Thanks for the questions. I perhaps tried to pack too much into that figure so it may be hard to interpret our meaning. I will try to clarify.
(a) The model (Ma) is learning to classify datapoints that are somewhere between the DL1 and DL2 as negative class which is something that we do not want
Pretty much so. By chance, the model may have actually learned exactly the true function we are trying to model. (By true function I mean the actual real-world phenomenon we are trying to model. It's often called target function, but that would be confusing in this context.) But it is unlikely, because there is an infinite number of alternative functions that fit the data. Therefore, we want to regularize, i.e. make the model more likely to learn likely true functions. The following four subfigures attempt to explain how noise- and consistency-based regularization schemes such as mean teacher perform useful regularization.
(b) We augmented the dataset by creating new points (small blue dots) by adding noise to DL1 and DL2. We assigned class 1 to these new datapoints and now when we train the model (Mb) it learns to be somewhat non variant around DL1 and DL2 in order to predict class 1 for the small blue dots
Yes.
(c) An unlabeled example DU1 has entered the picture. Model (Mc1) is predicting it as class 1 but near the boundary between class 1 and class 2. Mc1 = The thin, pointed grey curve. Now, we augment the dataset by adding noise to unlabeled DU1 and create unlabeled datapoints (small black dots) and train the model (Mc2) with an additional L2 (or anything measuring consistency between two outputs) loss between the predictions of noisy unlabeled datapoints and DU1. Now, it learns to have a smooth boundary at the top of the curve rather than a pointed curve that it was learning earlier. Mc2 = thick grey curve. The DU1 is still predicted near to the prediction boundary. Why is this model better than the model in (b)?
Yes, that's correct. (Sorry for the very vague "gray curve" in the caption, and congratulations for deciphering it correctly. The thin curve is the teacher and the thick the student.)
I really should have drawn two figures like (c) (and (b) too): one where the model happens to be "lucky" and one where it happens to be "unlucky". As it is, I only drew an unlucky (c), which makes this confusing. Being unlucky motivates (d) and (e), but then (c) is indeed worse than (b).
The reason (c) is very unlucky is because it happens to peak right where the unlabeled example is. If it peaked at some other place, the noise around the unlabeled data point would smooth that peak, and bring it closer to the prediction of the unlabeled data point, and also the true value.
So sometimes a (c) is better than a (b), sometimes it is worse. We don't want to depend on luck, which motivates (d) and (e).
(d) Did not get it. Please explain
In (c) we were unlucky because we happened to pick the worst possible target. We don't want to be unlucky, so we sample the target prediction many times. We do this by adding noise to the teacher inputs too. Then the model learns to smooth the predictions around unlabeled data points towards the expected prediction in that neighborhood. The expected prediction is less noisy and probably a better target than any single prediction.
(e) Did not get it. Please explain
In (d) we reduced target noise by averaging over the neighborhood of each unlabeled data point (so in the space of input dimensions). But there's also noise in the model parameters. We can reduce our dependence on luck by averaging over model parameters. The way mean teacher does this is by averaging the model parameters over training steps.
(Arguably, dropout also adds noise to the model parameters, and thus is another way of improving expected model estimates. But in the paper we consider it a form of input noise, rather than parameter noise.)
I hope this clarifies it.
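The parameter-averaging idea in (e) can be sketched numerically. The following is my own toy illustration, not the authors' code: the "model" is just a parameter vector, a noisy gradient step stands in for SGD on the labeled loss, and the teacher is an exponential moving average (EMA) of the student's parameters; the decay of 0.99 and the noise scale are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: the "model" is a parameter vector; the noisy gradient step
# stands in for SGD on the labeled loss.
true_w = np.array([1.0, -2.0, 0.5])
student = np.zeros(3)
teacher = np.zeros(3)
ema_decay = 0.99  # hypothetical smoothing coefficient

for step in range(2000):
    grad = (student - true_w) + rng.normal(scale=0.5, size=3)
    student -= 0.05 * grad
    # Mean-teacher update: EMA of the student's parameters over training steps.
    teacher = ema_decay * teacher + (1 - ema_decay) * student

student_err = np.linalg.norm(student - true_w)
teacher_err = np.linalg.norm(teacher - true_w)
print(student_err, teacher_err)
```

Because the EMA averages out step-to-step noise, the teacher typically ends up closer to the target than the noisy student, which is the intuition behind averaging over model parameters.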
Understanding semi supervised technique called mean teachers
Regression - variance of predictions much lower than variance of target
My answer will focus on the baseline OLS case, but the mechanics are similar for techniques like Lasso (although I'll admit that I do not know how $R^2$ is computed for such methods). Also, my answer relates to in-sample fit.
Recall that $R^2$ is defined as (also recall that the mean of the fitted values equals the mean of the $y$, $\bar y=\bar{\hat{y}}$)
$$
R^2=\frac{(\hat y-\bar y)'(\hat y-\bar y)}{(y-\bar y)'(y-\bar y)},
$$
which we may rewrite into the ratio of variance explained to variance of the dependent variable,
$$
R^2=\frac{\frac{1}{n-1}\sum_i(\hat y_i-\bar y)^2}{\frac{1}{n-1}\sum_i( y_i-\bar y)^2}=\frac{\hat\sigma^2_{\hat y}}{\hat\sigma^2_{y}},
$$
So, when you have a low $R^2$, that is tantamount to saying that the variance of the predictions is only a small fraction of the variance of the target variable (the ratio of the standard deviations being $\sqrt{R^2}$). A fortiori, if you "sacrifice" $R^2$, that ratio can only decrease further.
Here is a little graphical illustration, in which both the $y_i$ (blue) and the fitted values (salmon) are projected onto the y-axis, for a dataset in which $R^2$ is relatively low. We observe that the variation of the fitted values is, as expected, smaller.
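The identity $R^2=\hat\sigma^2_{\hat y}/\hat\sigma^2_{y}$ can also be checked numerically. A quick sketch with simulated data (the coefficients and noise level are arbitrary), for OLS with an intercept:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate a noisy linear relationship so that R^2 is deliberately low.
n = 1000
x = rng.normal(size=n)
y = 0.5 * x + rng.normal(scale=2.0, size=n)

# OLS fit via the normal equations.
X = np.column_stack([np.ones(n), x])
beta = np.linalg.solve(X.T @ X, X.T @ y)
y_hat = X @ beta

# R^2 computed the usual way ...
r2 = 1 - np.sum((y - y_hat) ** 2) / np.sum((y - y.mean()) ** 2)

# ... equals the ratio of the variance of the fitted values to the
# variance of y (valid for OLS with an intercept).
var_ratio = np.var(y_hat) / np.var(y)
print(r2, var_ratio)
```

As expected, the two numbers agree, and the standard deviation of the fitted values is well below that of the target.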
What is the best point forecast for lognormally distributed data?
It is a standard result from introductory statistics that the expectation of a distribution is the one number summary that will minimize the expected squared error. The expectation of the lognormal distribution with log-mean $\mu$ and log-variance $\sigma^2$ is $\exp\big(\mu+\frac{\sigma^2}{2}\big)$.
It is almost as well known that the median of a distribution is the one number summary that will minimize the expected absolute error (Hanley et al., 2001, The American Statistician). The median of the lognormal distribution with log-mean $\mu$ and log-variance $\sigma^2$ is $\exp(\mu)$.
Since the MASE is simply a scaled MAE, the point forecast that minimizes the expected MAE will also minimize the expected MASE.
It turns out that the loss $\Big|\ln\big(\frac{y}{\hat{y}}\big)\Big|$ is also minimized in expectation by the median of the distribution (Kuketayev, 2015, "Optimal Point Forecasts for Certain Bank Deposit Series" in the 21st Federal Forecasters Conference: Are Forecasts Accurate? Does it Matter?), so the point forecast that minimizes the expected MAE will also minimize this loss function in expectation.
The MAPE is a bit more tricky. Per Gneiting (2011, JASA, p. 748 with $\beta=-1$), the point forecast minimizing the expected MAPE for a density $f$ is the median of a distribution with density proportional to $\frac{1}{y}f(y)$. Now, the lognormal distribution with log-mean $\mu$ and log-variance $\sigma^2$ has density
$$ f(y) = \frac{1}{y\sigma\sqrt{2\pi}}\exp\bigg(-\frac{(\ln y-\mu)^2}{2\sigma^2}\bigg). $$
Therefore the density we are interested in is
$$ \frac{1}{y}f(y) = \frac{1}{y^2\sigma\sqrt{2\pi}}\exp\bigg(-\frac{(\ln y-\mu)^2}{2\sigma^2}\bigg)\propto\frac{1}{y^2}\exp\bigg(-\frac{(\ln y-\mu)^2}{2\sigma^2}\bigg). $$
(Since we are only interested in the distribution up to a proportionality factor, we can disregard the constant multiplier.)
Now, set
$$ m := \exp(\mu-\sigma^2). $$
We claim that $m$ is the median of $\frac{1}{y}f(y)$, i.e., the point forecast minimizing the expected MAPE, which we were looking for. (Coincidentally, $m$ is also the mode of the original lognormal distribution. This relationship does not hold for other strictly positive distributions, e.g., the gamma.)
To prove that $m$ is the median we are looking for, we note that
$$ \int_a^b \frac{1}{y^2}\exp\bigg(-\frac{(\ln y-\mu)^2}{2\sigma^2}\bigg)\,dy = \sqrt{\frac{\pi}{2}}\sigma\exp\Big(\frac{\sigma^2}{2}-\mu\Big)\text{erf}\bigg(\frac{-\mu+\sigma^2+\ln y}{\sqrt{2}\sigma}\bigg)\bigg|_{y=a}^b, $$
where $\text{erf}$ denotes the error function, which has the following properties:
$$ \lim_{x\to-\infty}\text{erf}(x)=-1, \quad\text{erf}(0)=0, \quad \lim_{x\to\infty}\text{erf}(x)=1. $$
Substituting the limits into the integral, we obtain that
$$ \int_0^m\frac{1}{y^2}\exp\bigg(-\frac{(\ln y-\mu)^2}{2\sigma^2}\bigg)\,dy=\int_m^\infty\frac{1}{y^2}\exp\bigg(-\frac{(\ln y-\mu)^2}{2\sigma^2}\bigg)\,dy. $$
Since the proportionality factors do not involve $m$, this yields that
$$ \int_0^m \frac{1}{y}f(y)\,dy = \int_m^\infty \frac{1}{y}f(y)\,dy $$
as required.
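A quick Monte Carlo sanity check of the MAPE claim (my own sketch; the parameter values, sample size, and search grid are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(1)
mu, sigma = 0.5, 0.8

# Large lognormal sample with log-mean mu and log-sd sigma.
y = rng.lognormal(mean=mu, sigma=sigma, size=100_000)

# Mean absolute percentage error for a grid of candidate point forecasts.
candidates = np.linspace(0.2, 3.0, 281)
mape = np.array([np.mean(np.abs((y - c) / y)) for c in candidates])

best = candidates[np.argmin(mape)]
claimed = np.exp(mu - sigma ** 2)  # the MAPE-optimal forecast derived above
print(best, claimed)
```

The empirical minimizer lands essentially on top of $\exp(\mu-\sigma^2)$, well below both the median $\exp(\mu)$ and the mean $\exp(\mu+\sigma^2/2)$.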
What is the best point forecast for lognormally distributed data?
My answers are for a distribution with known parameters. It's not specific to lognormal, but obviously applies to it too. The integrals use the lower limit of integration 0, but you can change them to $-\infty$ then they'll work for any distribution with finite mean, not only the ones with positive domain such as lognormal.
The optimal point forecast is $\hat x$, the PDF and CDF are $f(x),F(x)$, loss function is $C(x,\hat x)$.
1. MSE, the optimal forecast is the mean $\hat x = E[x]$.
$$C(x,\hat x)=(x-\hat x)^2$$
First order condition (FOC) for minimum expected cost:
$$\frac \partial {\partial \hat x}E[C(x,\hat x)]=E[-2(x-\hat x)]=0$$
$$\hat x=E[x]=\mu$$
2,3. MAE & MASE, the optimal forecast is median $F(\hat x)=1/2$.
$$C(x,\hat x)=|x-\hat x|$$
First order condition (FOC) for minimum expected cost:
$$\frac \partial {\partial \hat x}E[C(x,\hat x)]=
\frac \partial {\partial \hat x}\left(\int_0^{\hat x}(\hat x-x)dF(x)
+\int_{\hat x}^{\infty}( x-\hat x)dF(x)\right)\\
=F(\hat x)-(1-F(\hat x))=0$$
$$F(\hat x)=\frac 1 2$$
So, $\hat x$ is the median.
4. MAPE, the optimal forecast is the median of $G$, $G(\hat x)=1/2$, where $dG = c\,dF/x$ for some constant $c$.
$$C(x,\hat x)=|1-\frac{\hat x} x |$$
First order condition (FOC) for minimum expected cost:
$$\frac \partial {\partial \hat x}E[C(x,\hat x)]=
\frac \partial {\partial \hat x}\left(\int_0^{\hat x}(\frac{\hat x-x} x)dF(x)
+\int_{\hat x}^{\infty}( \frac{x-\hat x} x)dF(x)\right)\\
=\frac 1 {\hat x}(G(\hat x)-(1-G(\hat x)))=0$$
$$G(\hat x)=\frac 1 2$$
So, $\hat x$ is the median of $G$.
Finite mean
It is important to note that distributions with an undefined mean, such as the Cauchy, will not have a good answer for MSE. This is a very serious problem in business forecasting, for it is not obvious that every real-life distribution has a mean. It can be argued that some distributions have such fat tails that the mean is in fact undefined.
In these cases there is no optimal point forecast with MSE.
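These first-order conditions can be checked by brute force on simulated data. A sketch (lognormal is used only as a convenient positive-valued example; the grid is arbitrary):

```python
import numpy as np

rng = np.random.default_rng(2)
y = rng.lognormal(mean=0.0, sigma=1.0, size=100_000)

# Evaluate the empirical expected cost over a grid of candidate forecasts.
grid = np.linspace(0.1, 5.0, 491)
mse = np.array([np.mean((y - c) ** 2) for c in grid])
mae = np.array([np.mean(np.abs(y - c)) for c in grid])

best_mse = grid[np.argmin(mse)]
best_mae = grid[np.argmin(mae)]

# The MSE minimizer matches the sample mean, the MAE minimizer the median.
print(best_mse, np.mean(y))
print(best_mae, np.median(y))
```

The grid-search minimizers reproduce the FOC results: the sample mean for squared error and the sample median for absolute error.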
The distribution of the product of a Bernoulli & an exponential random variable
I don't know why you call this an invalid CDF.
You have a mixed random variable $Y$:
$$Y=XZ=\begin{cases}0&,\text{ if }Z=0\\X&,\text{ if }Z=1\end{cases}$$
So the distribution function of $Y$ must be
\begin{align}
P(Y\le y)&=P(XZ\le y)
\\\\&=P(XZ\le y\mid Z=0)P(Z=0)+P(XZ\le y\mid Z=1)P(Z=1)
\\\\&=\begin{cases}P(Z=0)+P(X\le y)P(Z=1)&,\text{ if }y\ge0 \\0&,\text{ if }y<0\end{cases}
\\\\&=\begin{cases}0.55+0.45(1-e^{-cy})&,\text{ if }y\ge0\\ 0&,\text{ if }y<0\end{cases}
\end{align}
That is,
$$F(y)=P(Y\le y)=\begin{cases}1-0.45e^{-cy}&,\text{ if }y\ge0\\ 0&,\text{ if }y<0\end{cases}$$
Taking $c=1$, the plot of $F(y)$ shows a jump of height $0.55$ at $y=0$, after which $F$ rises smoothly towards $1$.
If you check the conditions of a valid CDF, you will see that $F$ satisfies all those conditions.
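A simulation makes the mixed nature of $Y$ concrete. This sketch (my own check, using the same $P(Z=1)=0.45$ and $c=1$) compares the empirical CDF of $Y=XZ$ with the formula above:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 200_000

# Z ~ Bernoulli(0.45), X ~ Exponential(rate c = 1), Y = X * Z.
z = rng.random(n) < 0.45
x = rng.exponential(scale=1.0, size=n)
y = x * z

def F(t):
    """CDF derived above: 1 - 0.45 * exp(-t) for t >= 0, else 0."""
    return np.where(t >= 0, 1 - 0.45 * np.exp(-np.maximum(t, 0)), 0.0)

# Compare the empirical CDF with F at a few points.
ts = np.array([0.0, 0.5, 1.0, 2.0])
emp = np.array([np.mean(y <= t) for t in ts])
print(emp, F(ts))
```

Note the atom at zero: roughly 55% of the simulated $Y$ values are exactly $0$, matching $F(0)=0.55$.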
The distribution of the product of a Bernoulli & an exponential random variable
The first three central moments of $Y$ are as follows:
The expectation of $Y$ can be written as:
$$\displaystyle \mu_{{1}}\, = \, E(Y) \, = \, 0 \, (1-p) \, + \, \int_{0}^{\infty }\!\, p \, y \, c \,{{\rm e}^{- c \,y}}\,{\rm d}y \,= \, \frac {p}{ c }$$
The variance of $Y$ can be written as:
$$\displaystyle \mu_{{2}}\, = \, Var(Y) \, = \, (1-p) \, \left(0-\frac{p}{ c }\right) ^2 +\int_{0}^{\infty }\!p \, \left( y-{\frac {p}{ c }} \right) ^{2} c \,{{\rm e}^{- c \,y}}\,{\rm d}y \, = \,-{\frac {p \left( p-2 \right) }{{ c }^{2}}}$$
The third central moment can be written as:
$$\displaystyle \mu_{{3}}\, = \, (1-p) \, \left(0-\frac{p}{ c }\right) ^3 +\int_{0}^{\infty }\!p \, \left( y-{\frac {p}{ c }} \right) ^{3} c \,{{\rm e}^{- c \,y}}\,{\rm d}y \, = \,2\,{\frac {p \left( {p}^{2}-3\,p+3 \right) }{{ c }^{3}}}$$
with $p = Pr(Z=1)$ and $(1-p) = Pr(Z=0)$.
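These closed forms are easy to verify by Monte Carlo. A sketch with arbitrary parameter values $p=0.45$, $c=2$:

```python
import numpy as np

rng = np.random.default_rng(4)
p, c = 0.45, 2.0
n = 1_000_000

# Y = X * Z with Z ~ Bernoulli(p) and X ~ Exponential(rate c).
z = rng.random(n) < p
y = rng.exponential(scale=1.0 / c, size=n) * z

# Closed-form moments from above.
mu1 = p / c
mu2 = -p * (p - 2) / c**2
mu3 = 2 * p * (p**2 - 3 * p + 3) / c**3

# Empirical counterparts.
m1_hat = np.mean(y)
m2_hat = np.mean((y - mu1) ** 2)
m3_hat = np.mean((y - mu1) ** 3)
print(m1_hat, mu1)
print(m2_hat, mu2)
print(m3_hat, mu3)
```

The empirical moments match the formulas to within simulation error.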
Can anyone help explain this basic example of posterior
Sorry for being confusing! The joint posterior distribution on $(\xi_1,\xi_2)$ is
$$\pi(\xi_1,\xi_2|x)\propto \exp\{-(x-\xi_1)^2/2\}\pi_1(2\xi_1)\pi_2(2\xi_2)$$
Therefore the marginal posterior on $\xi_2$ is given by the marginal of the above, up to a constant, that is,
$$\pi(\xi_2|x)\propto \int\exp\{-(x-\xi_1)^2/2\}\pi_1(2\xi_1)\pi_2(2\xi_2)\,\text{d}\xi_1$$
which does not depend on $x$. This is a case, albeit an artificial case, when the posterior and the prior are equal.
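One way to see that the $x$-dependence really does cancel after normalization is a direct numerical check. Here I take, purely for illustration, $\pi_1$ and $\pi_2$ to be standard normal densities (these concrete priors are my own assumption, not part of the original example):

```python
import numpy as np

# Hypothetical concrete priors: pi_1 = pi_2 = standard normal density.
def phi(t):
    return np.exp(-t**2 / 2) / np.sqrt(2 * np.pi)

xi1 = np.linspace(-10, 10, 4001)
xi2 = np.linspace(-10, 10, 4001)
d1 = xi1[1] - xi1[0]
d2 = xi2[1] - xi2[0]

def posterior_xi2(x):
    # Integrate the joint posterior over xi_1 (Riemann sum), then normalize
    # over xi_2; the x-dependent factor h is a constant in xi_2 and cancels.
    h = np.sum(np.exp(-(x - xi1) ** 2 / 2) * phi(2 * xi1)) * d1
    unnorm = h * phi(2 * xi2)
    return unnorm / (np.sum(unnorm) * d2)

p_a = posterior_xi2(x=0.0)
p_b = posterior_xi2(x=3.0)
gap = np.max(np.abs(p_a - p_b))
print(gap)
```

The normalized posterior of $\xi_2$ comes out identical for very different observed values of $x$, which is the point of the example.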
Can anyone help explain this basic example of posterior
I am not providing another answer to the question, but instead another example of the same sort as presented in the question. The example I present came up in the course of my research (as part of a Gibbs sampler within a Dirichlet process mixture model); in that sense it is not "contrived."
There is one observation $x$ drawn from an order-statistic distribution. The probability density for the $k$-th order statistic from a sample of $n$ iid draws from the uniform distribution on the unit interval is given by
\begin{equation}
p(x|k,n) = \textsf{Beta}(x|k,n-k+1) ,
\end{equation}
where
\begin{align}
n &\in \{1, 2, \ldots \} \\
k &\in \{1, \ldots, n\} .
\end{align}
Let the prior for $(k,n)$ be given by $p(k|n)\,p(n)$, where $p(k|n) = 1/n$ for all $k$ and $p(n)$ is an arbitrary distribution.
The posterior distribution for $(k,n)$ is characterized by
\begin{equation}
p(k,n|x) \propto p(x|k,n)\,p(k,n) = \frac{\textsf{Beta}(x|k,n-k+1)}{n}\,p(n) .
\end{equation}
The marginal posterior for $n$ can be computed by integrating out (i.e., summing out) $k$:
\begin{equation}
p(n|x) = \sum_{k=1}^n \frac{\textsf{Beta}(x|k,n-k+1)}{n}\,p(n) = p(n) .
\end{equation}
We see that the posterior distribution for $n$ is unchanged from its prior distribution.
The result is delivered by the adding-up property of order statistics. For example, suppose $n$ draws are made from the uniform distribution and sorted in order. Then one of the sorted draws is chosen at random (i.e., with probability $1/n$). This random choice "undoes" the effect of sorting so the distribution of the draw chosen via this mechanism is simply the underlying distribution, which in this case is the uniform distribution.
For completeness, note that the posterior distribution for $k$ conditional on $n$ can be expressed in terms of the following density:
\begin{equation}
p(k|x,n) = \textsf{Binomial}(k-1|n-1,x) .
\end{equation}
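The adding-up property behind the result, namely that the order-statistic densities weighted by the uniform prior $p(k|n)=1/n$ average out to the uniform density, can be checked directly (a small sketch with arbitrary test values):

```python
from math import factorial

def beta_pdf(x, a, b):
    # Beta(x | a, b) density for integer a, b.
    const = factorial(a + b - 1) / (factorial(a - 1) * factorial(b - 1))
    return const * x ** (a - 1) * (1 - x) ** (b - 1)

# Averaging the n order-statistic densities with weight 1/n recovers the
# uniform density on (0, 1): (1/n) * sum_k Beta(x | k, n-k+1) = 1.
x = 0.37
mixtures = {n: sum(beta_pdf(x, k, n - k + 1) for k in range(1, n + 1)) / n
            for n in (1, 2, 5, 10)}
print(mixtures)
```

Every mixture value equals $1$, the uniform density, so the likelihood contributes nothing beyond the prior once $k$ is summed out.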
Reparameterization trick for gamma distribution
The answer is "yes" in one sense and "no" in another sense.
Suppose $X \sim \operatorname{Gamma}(\alpha,\beta)$. Let $F_{\alpha,\beta}$ denote the cdf of a Gamma distribution. Then define $\epsilon = \Phi^{-1}[F_{\alpha,\beta}(X)]$. Then if you simulate $\epsilon \sim N(0,1)$ you can get the relevant gamma distribution by setting $X = F^{-1}_{\alpha,\beta}[\Phi(\epsilon)]$. This is a consequence of the probability integral transform. Additionally, the transform $T(\epsilon ; \alpha, \beta) = F^{-1}_{\alpha,\beta}[\Phi(\epsilon)]$ is differentiable. So you could use this idea with the reparametrization trick, at least in principle, to improve your stochastic variational inference. This implies that, in a liberal sense, the answer is "yes, there is a reparameterization trick", and in fact there is one for essentially any family of continuous distributions. If this seems sort of ad-hoc, notice that if you apply this trick with the Gaussian family in place of the gamma, you get back exactly the usual reparameterization trick.
In a more restrictive sense, I would say the answer is "no". The function $F^{-1}$ above is not available in closed form, so things are not so convenient to the point where we might disqualify this approach. Alternatively, there is no reason to restrict ourselves to $\epsilon \sim N(0,1)$, and we might just ask for $\epsilon \sim Q$ for some standard distribution $Q$ that is easy to sample from, such that $T(\epsilon; \alpha, \beta) \sim \text{Gamma}(\alpha,\beta)$ where $T$ is also easy to compute.
If you find such a transformation $T$ and standard distribution $Q$, let me know because I would be interested in it. The main problem is the shape parameter $\alpha$. If I know $\alpha$, then I can take $T(\epsilon; \alpha, \beta) = \epsilon \beta$ and set $\epsilon \sim \mbox{Gamma}(\alpha,1)$, because the gamma family with known $\alpha$ is a scale family. The shape parameter does not have any nice algebraic properties as far as I know, aside from the fact that $X_1 + X_2 \sim \mbox{Gamma}(\alpha_1 + \alpha_2, 1)$ provided that $X_i \sim \mbox{Gamma}(\alpha_i, 1)$ and they are independent. It's not clear how to take advantage of this. A negative result for us is that, if such a convenient $T$ existed, then R would probably use it to sample from a generic gamma distribution, but instead it uses rejection sampling.
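To make the "yes, in principle" construction concrete, here is a sketch of the inverse-CDF reparameterization using SciPy (the parameter values are arbitrary; note that in an actual VAE one would also need gradients of `gamma.ppf` with respect to $\alpha$ and $\beta$, which SciPy does not provide):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
alpha, beta = 2.5, 1.5  # shape and rate (arbitrary example values)

# Reparameterize a Gamma(alpha, beta) draw through a standard normal:
# eps ~ N(0, 1), then X = F^{-1}_{alpha,beta}(Phi(eps)).
eps = rng.normal(size=100_000)
x = stats.gamma.ppf(stats.norm.cdf(eps), a=alpha, scale=1.0 / beta)

# Sanity check: the transformed draws match the Gamma(alpha, beta) moments.
print(x.mean(), alpha / beta)
print(x.var(), alpha / beta**2)
```

The transformed standard-normal noise reproduces the gamma mean $\alpha/\beta$ and variance $\alpha/\beta^2$, but the `ppf` call has no closed form, which is the "no" part of the answer.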
|
Reparameterization trick for gamma distribution
|
The answer is "yes" in one sense and "no" in another sense.
Suppose $X \sim \operatorname{Gamma}(\alpha,\beta)$. Let $F_{\alpha,\beta}$ denote the cdf of a Gamma distribution. Then define $\epsilon =
|
Reparameterization trick for gamma distribution
The answer is "yes" in one sense and "no" in another sense.
Suppose $X \sim \operatorname{Gamma}(\alpha,\beta)$. Let $F_{\alpha,\beta}$ denote the cdf of a Gamma distribution. Then define $\epsilon = \Phi^{-1}[F_{\alpha,\beta}(X)]$. Then if you simulate $\epsilon \sim N(0,1)$ you can get the relevant gamma distribution by setting $X = F^{-1}_{\alpha,\beta}[\Phi(\epsilon)]$. This is a consequence of the probability integral transform. Additionally, the transform $T(\epsilon ; \alpha, \beta) = F^{-1}_{\alpha,\beta}[\Phi(\epsilon)]$ is differentiable. So you could use this idea with the reparametrization trick, at least in principle, to improve your stochastic variational inference. This implies that, in a liberal sense, the answer is "yes, there is a reparameterization trick", and in fact there is one for essentially any family of continuous distributions. If this seems sort of ad-hoc, notice that if you apply this trick with the Gaussian family in place of the gamma, you get back exactly the usual reparameterization trick.
In a more restrictive sense, I would say the answer is "no". The function $F^{-1}$ above is not available in closed form, so things are not so convenient to the point where we might disqualify this approach. Alternatively, there is no reason to restrict ourselves to $\epsilon \sim N(0,1)$, and we might just ask for $\epsilon \sim Q$ for some standard distribution $Q$ that is easy to sample from, such that $T(\epsilon; \alpha, \beta) \sim \text{Gamma}(\alpha,\beta)$ where $T$ is also easy to compute.
If you find such a transformation $T$ and standard distribution $Q$, let me know because I would be interested in it. The main problem is the shape parameter $\alpha$. If I know $\alpha$, then I can take $T(\epsilon; \alpha, \beta) = \epsilon \beta$ and set $\epsilon \sim \mbox{Gamma}(\alpha,1)$, because the gamma family with know $\alpha$ is a scale family. The shape parameter does not have any nice algebraic properties as far as I know, aside from the fact that $X_1 + X_2 \sim \mbox{Gamma}(\alpha_1 + \alpha_2, 1)$ provided that $X_i \sim \mbox{Gamma}(\alpha_i, 1)$ and they are independent. It's not clear how to take advantage of this. A negative result for us is that, if such a convenient $T$ existed, then R would probably use that to sample from the a generic gamma distribution, but instead it uses rejection sampling.
|
40,577
|
Reparameterization trick for gamma distribution
|
In terms of the strategy from the VAE paper, no, there's not a straightforward way of using a gamma distribution (as guy explained). However, a lot of research has gone into other strategies for reparameterizing gammas in the VAE-type framework:
This paper by Knowles https://arxiv.org/pdf/1509.01631.pdf
This paper from Blei's lab https://arxiv.org/pdf/1610.02287.pdf
This paper from Deepmind https://arxiv.org/pdf/1805.08498.pdf
This paper from Uber AI https://arxiv.org/pdf/1806.01851.pdf
You could alternatively use BBVI (here's the original Blei lab paper http://www.cs.columbia.edu/~blei/papers/RanganathGerrishBlei2014.pdf), as demonstrated in this tutorial http://ajbc.io/resources/bbvi_for_gammas.pdf
|
40,578
|
MAP estimation as regularisation of MLE
|
The maximum likelihood method aims at finding model parameters that best match some data:
$$
\theta_{ML}=\mathrm{argmax}_\theta \,p(x|y,\theta)
$$
Maximum likelihood does not use any prior knowledge about the expected distribution of the parameters $\theta$ and thus may overfit to the particular data $x$, $y$.
Maximum a-posteriori (MAP) method adds a prior distribution of the parameters $\theta$:
$$
\theta_{MAP}=\mathrm{argmax}_\theta \, p(x|y,\theta)p(\theta)
$$
The optimal solution must still match the data but it has also to conform to your prior knowledge about the parameter distribution.
How is this related to adding a regularizer term to a loss function?
Instead of maximizing the posterior directly, one often minimizes its negative logarithm:
$$
\begin{align}
\theta_{MAP}&=\mathrm{argmin}_\theta \, -\log p(x|y,\theta)p(\theta) \\
&=\mathrm{argmin}_\theta \, -(\log p(x|y,\theta) + \log p(\theta))
\end{align}
$$
Assuming you want the parameters $\theta$ to be normally distributed around zero, you get $-\log p(\theta) \propto ||\theta||_2^2$, which is exactly an $L_2$ regularization term added to the loss.
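To make the correspondence concrete, here is a sketch for the linear-Gaussian case (all settings are illustrative): a Gaussian likelihood with noise variance $\sigma^2$ and a $N(0,\tau^2 I)$ prior give a MAP estimate identical to ridge regression with $\lambda = \sigma^2/\tau^2$.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
n, d = 200, 5
X = rng.standard_normal((n, d))
theta_true = rng.standard_normal(d)
sigma, tau = 0.5, 1.0  # noise std and prior std (illustrative choices)
y = X @ theta_true + sigma * rng.standard_normal(n)

# Closed-form ridge solution with lambda = sigma^2 / tau^2
lam = sigma**2 / tau**2
theta_ridge = np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

# Generic minimization of the negative log-posterior gives the same answer
def neg_log_posterior(t):
    return ((y - X @ t) ** 2).sum() / (2 * sigma**2) + (t @ t) / (2 * tau**2)

theta_map = minimize(neg_log_posterior, np.zeros(d)).x
print(np.abs(theta_ridge - theta_map).max())
```

The printed maximum difference between the two estimates is numerically negligible, which is the point: the prior term in the negative log-posterior plays exactly the role of the regularizer.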
|
40,579
|
How do you measure the accuracy of a model that gives quantile forecasts or distributions of forecasts?
|
The standard approach is to use probability scoring. See Gneiting and Katzfuss (2014) for some of the mathematical background.
One example of a probability scoring measure is quantile scoring based on the pinball loss function. For each time period throughout the forecast horizon, you compute the $0.01, 0.02, \dots, 0.99$ quantiles --- call these $q_1,\dots,q_{99}$, with $q_0=-\infty$ or the natural lower bound, and $q_{100}=\infty$ or the natural upper bound. These 99 values then define (approximately) the forecast densities.
For a quantile forecast $q_a$ with $a/100$ as the target quantile, the pinball loss $L$ is defined as:
$$
L(q_a, y) = \begin{cases}
(1 - a/100) (q_a - y), & \text{if $y< q_a$};\\
a/100 (y - q_a), & \text{if $y\ge q_a$};
\end{cases}
$$
where $y$ is the observation used for verification, and $a = 1, 2, \dots, 99$.
Note that $L(q_{50},y)$ is equal to $0.5 |q_{50}-y|$, half the value of the absolute error. For other quantiles, the loss is not symmetric.
To evaluate the full predictive densities, this score is then averaged over all target quantiles, from 0.01 to 0.99, for all time periods over all forecast horizons. The lower the score, the better the forecasts are.
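A minimal implementation sketch of this score (the function names are mine, and the comparison at the end uses arbitrary illustrative Gaussian forecast densities):

```python
import numpy as np
from scipy.stats import norm

def pinball_loss(q, y, a):
    """Pinball loss for a forecast q of the quantile at level a in (0, 1)."""
    return (1 - a) * (q - y) if y < q else a * (y - q)

def quantile_score(quantiles, y):
    """Average pinball loss over the 0.01 ... 0.99 quantile forecasts."""
    levels = np.arange(1, 100) / 100.0
    return float(np.mean([pinball_loss(q, y, a) for q, a in zip(quantiles, levels)]))

# Sanity check: at the median, the pinball loss is half the absolute error.
assert pinball_loss(3.0, 5.0, 0.5) == 0.5 * abs(3.0 - 5.0)

# A predictive density centred near the outcome scores lower (better)
# than one centred far away.
levels = np.arange(1, 100) / 100.0
good = quantile_score(norm.ppf(levels, loc=0.0), y=0.1)
bad = quantile_score(norm.ppf(levels, loc=3.0), y=0.1)
print(good < bad)  # True
```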
|
40,580
|
Puzzled by definition of sufficient statistics
|
All those interpretations seem to be variations of expressing the same thing:
The independence, from the true parameter $\theta$, of the distribution of the sample $X$ conditional on the statistic $T$.
Which means that the sample $X$, conditional on $T$, does not tell any more information (beyond the information from the statistic $T$) about the parameter $\theta$, because in terms of the frequency/probability distribution of the possible observed samples $X$ there is no difference that would point out anything about $\theta$.
About sufficient statistics
You might be interested to read two works from R.A. Fisher which I believe are very good for didactic purposes (and also good for getting to know the classics)
A mathematical examination of the methods of determining the accuracy of an observation by the mean error, and the mean square error. (1920)
Here Fisher compares different statistics to estimate the $\sigma$ parameter of the normal distribution.
He expresses the relative sampling variance (relative standard error) for different forms of deviation: the mean error $\sigma_1 = \sqrt{\frac{\pi}{2}}\,\frac{1}{n}\sum |x-\bar{x}|$, the mean square error $\sigma_2 = \sqrt{\frac{1}{n}\sum (x-\bar{x})^2}$, and variants employing sums of errors raised to any other power, $\sigma_p$.
He finds out that the mean square error has the lowest relative standard error, and he explores further the special properties of the mean squared error.
He then expresses the distribution/frequency of the one statistic based on the other statistic and observes that the mean squared error, $\sigma_2$, is special because the distribution of $\sigma_1$ or other $\sigma_p$, conditional on $\sigma_2$, does not depend on $\sigma$. That means that no other statistic $\sigma_p$ will be able to tell anything more about the parameter $\sigma$ than what $\sigma_2$ tells about $\sigma$.
He mentions that the iso-surfaces of the statistic correspond to the iso-surfaces of the likelihood function, and derives for the mean error statistic, $\sigma_1$ (whose iso-surfaces are not n-spheres but multidimensional polytopes), that this coincides with the Laplace distribution (a little bit analogous to how Gauss derived the normal distribution based on the root mean square statistic).
Theory of statistical estimation (1925)
Here Fisher explains several concepts such as consistency and efficiency. In relation to the concept of sufficiency he explains
the 'factorization theorem'
and the fact that a sufficient statistic, if it exists, will be a solution of the equations to obtain the maximum likelihood.
The explanation of sufficiency is particularly clear with the Poisson distribution as an example. The probability distribution function for a single observation $x$ is $$f(x) = e^{-\lambda} \frac{\lambda^x}{x!}$$ and the joint distribution of $n$ independent observations $\lbrace x_1, x_2,...,x_n \rbrace$ is $$f(x_1,...,x_n) = e^{-n\lambda} \frac{\lambda^{\sum_i x_i}}{x_1!x_2!...x_n!}$$ which can be written as $$f(x_1,...,x_n) = e^{-n\lambda} \frac{\lambda^{n\bar{x}}}{x_1!x_2!...x_n!} $$ and factorized into $$f(\bar{x}) \cdot f(x_1,...,x_n |\bar{x}) = e^{-n\lambda} \frac{(n\lambda)^{n\bar{x}}}{\left( n\bar{x} \right)!} \cdot \frac{\left( n\bar{x} \right) !}{n^{n\bar{x}}x_1!x_2!...x_n!} $$ which is the product of (1) the distribution function of the statistic, $f(\bar{x})$, and (2) the distribution function of the partitioning of $n\bar{x}$ into $x_1,...,x_n$, which you can intuitively see as a conditional distribution density $f(x_1,...,x_n |\bar{x})$. Note that the latter term does not depend on $\lambda$.
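This factorization can also be checked empirically: conditional on the sufficient statistic $n\bar{x}$, the partitioning of the counts is multinomial with equal cell probabilities, whatever $\lambda$ is. A quick simulation sketch (the sample size, conditioned total, and $\lambda$ values are arbitrary choices of mine):

```python
import numpy as np

rng = np.random.default_rng(2)
n, s = 4, 12  # sample size and the conditioned value of n * xbar

def conditional_mean_x1(lam, draws=200_000):
    """Mean of x_1 among Poisson(lam) samples whose total equals s."""
    x = rng.poisson(lam, size=(draws, n))
    keep = x[x.sum(axis=1) == s]  # condition on the sufficient statistic
    return keep[:, 0].mean()

# Both should be close to s/n = 3, with no trace of lambda left.
m1 = conditional_mean_x1(2.0)
m2 = conditional_mean_x1(5.0)
print(m1, m2)
```

Both conditional means land near 3 regardless of the $\lambda$ used to generate the data, which is sufficiency in action.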
Related to your two interpretations
If the PDF $f(x_1,...,x_n |\bar{x})$ is independent from $\theta$ then so should be the integrated probability (the CDF): $$\begin{multline}P(a_1<X_1<b_1,...,a_n<X_n<b_n |\bar{x}) = \\ = \int_{x_1 = a_1}^{x_1 = b_1} ... \int_{x_n =a_n}^{x_n = b_n} f(x_1,...,x_n |\bar{x}) d x_1 d x_2 ... d x_n\end{multline}$$
You suggest just using $\frac{f_{X,T}(x,t)}{f_T(t)}$, but it might not always be so easy to construct such an expression. The factorization already works if you can split the likelihood into $$f(x_1,...,x_n|\theta) = h(x_1,...,x_n) \cdot g(T(x)|\theta) $$ where only the factor $g(T(x)|\theta)$ depends on the parameter(s) $\theta$, and does so only through the statistic $T(x)$. Now note that it doesn't really matter how you express $h(x)$: you can just as well express this function in terms of other coordinates $y$ that relate to $x$, as long as that part is independent of $\theta$.
For instance, the factorization for the Poisson distribution could have already been finished by writing $$f(x_1,...,x_n) = \underbrace{e^{-n\lambda} \lambda^{n\bar{x}} \vphantom{\frac{1}{x_1!x_2!...x_n!}}}_{g(T(x)|\theta)} \cdot \underbrace{\frac{1}{x_1!x_2!...x_n!}}_{h_x(x_1,...,x_n)}$$ where the first term only depends on $\bar{x}$ and $\lambda$ and the second term does not depend on $\lambda$. So there is no need to look further for $\frac{f_{X,T}(x,t)}{f_T(t)}$.
In this second example there is also a drop of one variable: you do not have $Y_1...Y_n$ but one less, $Y_2...Y_n$. One example where this could be useful is when you use $T = \max \lbrace X_i \rbrace$ as statistic for a sample from the uniform distribution, $X_i \sim U(0,\theta)$. If you denote by $Y_i$ the $i$-th largest of the $X_i$, then it is very easy to express the conditional probability distribution $P(Y_i \vert T)$. But to express $P(X_i \vert T)$ is a bit more difficult (see Conditional distribution of $(X_1,\cdots,X_n)\mid X_{(n)}$ where $X_i$'s are i.i.d $\mathcal U(0,\theta)$ variables).
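The same kind of empirical check works for the uniform example: given $T = \max \lbrace X_i \rbrace \approx t$, the remaining observations behave as i.i.d. $U(0,t)$ regardless of $\theta$. A rough sketch (conditioning on $T$ falling in a narrow window is my approximation for illustration; the parameter values are arbitrary, with $\theta \ge t$ so the conditioning event has support):

```python
import numpy as np

rng = np.random.default_rng(3)

def mean_below_max(theta, t=2.0, tol=0.02, n=5, draws=400_000):
    """Mean of the non-maximal observations, given max(X) falls near t."""
    x = rng.uniform(0, theta, size=(draws, n))
    keep = x[np.abs(x.max(axis=1) - t) < tol]  # condition on T close to t
    mx = keep.max(axis=1, keepdims=True)
    return keep[keep < mx].mean()  # drop the maximum itself

# Both should be close to t/2 = 1, independent of theta.
m1 = mean_below_max(theta=2.5)
m2 = mean_below_max(theta=4.0)
print(m1, m2)
```

Whatever $\theta$ generated the data, the non-maximal observations have conditional mean near $t/2$: once $T$ is known, they carry no further information about $\theta$.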
What your textbook says
Note that your textbook already explains why it is giving these alternative interpretations.
In the case of sampling from a probability density function, the
meaning of the term "the conditional distribution of $X_1, ... , X_n$
given $S=s $" that appears in Definition 15 may not be obvious since
then $P[S=s]=0$
and the alternative interpretations do not relate so much to 'the concept of sufficiency' but more to 'the concept of a probability density function not directly expressing probabilities'.
The expression in terms of the cumulative density function (which does relate to a probability) is one way to circumvent it.
The expression in terms of the transformation is a particular way to express the partitioning theorem. Note that $f(X_1,...,X_n)$ is dependent on $\theta$ but $f(Y_2,...,Y_n)$, where the $T$ term is separated, is independent of $\theta$ (e.g. the example in the book shows that for normally distributed variables, with unknown mean $\mu$ and known variance 1, the distribution of $Y_i = X_i-X_1$ is $Y_i \sim N(0,2)$, thus independent of $\mu$).
A variation of the second interpretation (which dealt with the trouble that $f(X_1,...,X_n)$ is not independent from $\theta$) could be to show that $f(X_1,...,X_n)$ is independent from $\theta$ when constrained to the iso-surface where the sufficient statistic is constant.
This is sort of the geometrical interpretation that Fisher had. I am not sure why they use the more confusing interpretation. Possibly one may not see this interpretation, a sort of conditional probability density function that is analogous to a conditional probability, as theoretically clean.
About the expression $\frac{f_{X,T}(\mathbf{x},t)}{f_T(t)}$
Note that $f_{X,T}(\mathbf{x},t)$ is not easy to express since, $T$ depends on $\mathbf{X}$, and not every combination of $\mathbf{x}$ and $t$ is possible (so you are dealing with some function that is only non-zero on some surface in the space $\mathbf{X},T$ where $t$ and $\mathbf{x}$ are correctly related).
If you drop one of the variables in the vector $\mathbf{x}$ then it does become more suitable and this is very close to the conversion to parameters $y$ where you also have one number less.
However, this division is not too strange. The sufficient statistic is the one for which the distribution function $f_{X,T}(\mathbf{x},t)$ is constant (for different $\mathbf{x}$, the probability density $f_\mathbf{X}(\mathbf{x})$ is the same if $T$ is the same), so you should be able to divide it out (though the same works with any other function $g(t,\theta)$; it doesn't necessarily need to be the probability distribution $f_T(t,\theta)$).
|
40,581
|
Puzzled by definition of sufficient statistics
|
It's a somewhat stronger condition to state that the joint probability distribution function (aka $F_X = P(X_1 < x_1, \ldots, X_n < x_n)$) doesn't depend on $\theta$. I could for instance have a modified normal ($\mu$,1) distribution whose probability density function is $f_x = \phi(x)$ if $x \ne \mu$, and $1,000$ when $X=\mu$. That 0-measure delta function doesn't go away by conditioning on $\bar{X}$. This is a case where $\bar{X}$ is a sufficient statistic not because $f_{X|{\bar{X}}}$ doesn't depend on $\mu$ but because $F_{X|\bar{X}}$ doesn't depend on $\mu$.
|
40,582
|
Deterministic or stochastic universe in frequentist statistics?
|
I respectfully disagree with the answer by Frans; there is nothing in the frequentist methodology that takes any position on whether data-generating processes, as modeled by statistics, are deterministic or not. (While Wikipedia is a useful source for some statistical material, this unreferenced sentence does not hold any weight for me.) Frequentism defines the "probability" of an event in the context of a repeatable sequence of trials as the limiting relative frequency of that event over a sequence of trials.$^\dagger$ Hence, within this framework, the application of probabilistic models implies only that the user is satisfied that the event can be placed in a sequence of theoretically repeatable trials. The occurrence of events in the sequence, and the existence of a limiting relative frequency, are not affected by whether the process is deterministic or not.
In view of this, within the frequentist paradigm, one should not imbue references to "probability" or "stochastic" with any non-deterministic implications (in a metaphysical sense). Within this paradigm the notion of probability refers merely to limiting relative frequencies of events, and a "stochastic" model is just one that is described with the use of probability (i.e., with the appeal to limiting relative frequencies of events). A frequentist statistical model is only "non-deterministic" in a mathematical sense ---i.e., that the specification of the parameters does not logically imply the outcome of the individual random variable. (Or to put it another way, the limiting relative frequency of an event does not logically imply the occurrence or non-occurrence of the event at any particular point in the sequence.)
One could believe in randomness in the universe, or determinism, and apply frequentist methods and interpretations. Under this paradigm the observable values are considered to be results from a repeatable experiment and thus they are considered to be contained within a (hypothetical) infinite sequence, with limiting empirical distribution described by one or more "parameters". These latter objects are treated as "unknown constants" even if the user adopts a non-deterministic aleatory view which holds that the parameter is non-deterministic.
$^\dagger$ My view is that this frequentist definition is problematic, since it takes the concept of a repeatable experiment to be preliminary to probability, and it therefore has trouble explaining the conditions that constitute repeatability (since it cannot appeal to any probabilistic condition). This notion is actually well-described within Bayesian theory by the concept of an exchangeable sequence of values, where the condition of exchangeability corresponds to repeatability. Within the Bayesian framework the representation theorem of de Finetti then establishes that the probability corresponds to the limiting relative frequency as a mathematical consequence of exchangeability, rather than as a definition.
|
Deterministic or stochastic universe in frequentist statistics?
|
I respectfully disagree with the answer by Frans; there is nothing in the frequentist methodology that takes any position on whether data-generating processes, as modeled by statistics, are determinis
|
Deterministic or stochastic universe in frequentist statistics?
I respectfully disagree with the answer by Frans; there is nothing in the frequentist methodology that takes any position on whether data-generating processes, as modeled by statistics, are deterministic or not. (While Wikipedia is a useful source for some statistical material, this unreferenced sentence does not hold any weight for me.) Frequentism defines the "probability" of an event in the context of a repeatable sequence of trials as the limiting relative frequency of that event over a sequence of trials.$^\dagger$ Hence, within this framework, the application of probabilistic models implies only that the user is satisfied that the event can be placed in a sequence of theoretically repeatable trials. The occurrence of events in the sequence, and the existence of a limiting relative frequency, are not affected by whether the process is deterministic or not.
In view of this, within the frequentist paradigm, one should not imbue references to "probability" or "stochastic" with any non-deterministic implications (in a metaphysical sense). Within this paradigm the notion of probability refers merely to limiting relative frequencies of events, and a "stochastic" model is just one that is described with the use of probability (i.e., with the appeal to limiting relative frequencies of events). A frequentist statistical model is only "non-deterministic" in a mathematical sense ---i.e., that the specification of the parameters does not logically imply the outcome of the individual random variable. (Or to put it another way, the limiting relative frequency of an event does not logically imply the occurrence or non-occurrence of the event at any particular point in the sequence.)
One could believe in randomness in the universe, or determinism, and apply frequentist methods and interpretations. Under this paradigm the observable values are considered to be results from a repeatable experiment and thus they are considered to be contained within a (hypothetical) infinite sequence, with limiting empirical distribution described by one or more "parameters". These latter objects are treated as "unknown constants" even if the user adopts a non-deterministic aleatory view which holds that the parameter is non-deterministic.
$^\dagger$ My view is that this frequentist definition is problematic, since it takes the concept of a repeatable experiment to be preliminary to probability, and it therefore has trouble explaining the conditions that constitute repeatability (since it cannot appeal to any probabilistic condition). This notion is actually well-described within Bayesian theory by the concept of an exchangeable sequence of values, where the condition of exchangeability corresponds to repeatability. Within the Bayesian framework the representation theorem of de Finetti then establishes that the probability corresponds to the limiting relative frequency as a mathematical consequence of exchangeability, rather than as a definition.
Deterministic or stochastic universe in frequentist statistics?
Interesting question!
I would say from a statistical modelling perspective, all data are assumed to come from a combination of systematic components and a stochastic component, which means that data-generating processes, as modeled by statistics, are assumed to be non-deterministic in nature. Wikipedia even states that:
A statistical model is a special class of mathematical model. What distinguishes a statistical model from other mathematical models is that a statistical model is non-deterministic.
I will also give you my perspective as a former biologist, where we usually explain that statistical modelling (at least frequentist) amounts to something like: data = systematic components + stochastic component.
Ironically, you could interpret the stochastic part as being only 'apparently stochastic', since it relies on things that we did not or cannot measure.$^*$ Even 'random mutations' are eventually caused by a large sum of things we cannot measure, such as exposure to sunlight, failure of repair mechanisms, etc., but for all practical intents and purposes, it might as well be non-deterministic.
However, even if we could measure all those things perfectly, all down to the molecular level, we would have to concede that effects on a molecular scale are in turn influenced by things on a quantum scale, and... well,
there are limits to the precision with which quantities can be measured (uncertainty principle).
We run into the uncertainty principle, which would imply that in the end, the processes we model with statistics are random (if my understanding of it is correct).
That being said, a model is just that, a model. And I don't think statistics as a field takes a point of view on the nature of the universe. After all, that is not our field of research. At best you could argue that you implicitly assume the data-generating process is random by making use of statistical models.
$^*$ In other words, the sum of a large number of unobserved, independent effects create an (apparently) random deviation from the systematic effects. (In case of a large sum of uniformly distributed effects this nicely explains why we often assume normality, but I'm getting off track.)
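A quick simulation of that footnote (my own sketch, not from the original answer): summing many independent uniform "effects" already produces an aggregate that behaves approximately normally, with the mean and variance you would predict from the individual effects.

```python
import random
import statistics

random.seed(0)

# Sum many small, independent Uniform(-1, 1) "effects" per observation;
# by the central limit theorem the sums are approximately normal.
n_effects = 50    # number of unobserved effects per observation
n_obs = 20000     # number of simulated observations

sums = [sum(random.uniform(-1, 1) for _ in range(n_effects))
        for _ in range(n_obs)]

mean = statistics.fmean(sums)
var = statistics.pvariance(sums)
# Each Uniform(-1, 1) effect has mean 0 and variance 1/3, so the sum of 50
# effects should have mean near 0 and variance near 50/3.
print(mean, var)
```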
Where in the methodology does it matter?
Well, basically for everything! A model without a stochastic part is not considered a statistical model. This even applies to those who do not consider machine learning and statistics to be the same. Even from the point of view of stochastic optimization (the name gives it away a bit), a model cannot be trained further from new data if the loss function is already exactly zero.
How to overcome the computational cost of the KNN algorithm?
There's a large literature on speeding up nearest neighbor search, as well as numerous software libraries (you might consider using one of these instead of re-implementing from scratch). I'll describe some general classes of solutions below. Most of these methods aim to reduce the amount of computation by exploiting some kind of structure in the data.
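As a baseline, here is what the brute-force search these methods accelerate looks like (my own minimal stdlib Python sketch, not from any of the cited papers): every query costs one distance computation per training point.

```python
import heapq
import math

def knn_brute_force(train, query, k):
    """Return the k training points nearest to `query` (Euclidean distance).

    Cost is O(n) distance computations per query over n training points,
    which is exactly what the methods below try to reduce.
    """
    return heapq.nsmallest(k, train, key=lambda p: math.dist(p, query))

train = [(0.0, 0.0), (1.0, 1.0), (5.0, 5.0), (0.1, -0.2)]
# The two points closest to the origin come back first, sorted by distance.
print(knn_brute_force(train, (0.0, 0.0), 2))  # -> [(0.0, 0.0), (0.1, -0.2)]
```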
Parallelization
One approach is to parallelize the computation (e.g. using a cluster, GPU, or multiple cores on a single machine). Rather than reducing the amount of computation, this strategy breaks the problem into multiple pieces that are solved simultaneously on different processing units. For example, Garcia et al. (2008) report large speedups by parallelizing brute force search using GPUs.
Exact space partitioning methods
This class of methods aims to reduce the number of distance computations needed to find nearest neighbors. The data points are used to build a tree structure that hierarchically partitions the data space. The tree can efficiently identify some data points that cannot be nearest neighbors of a new query point, so distances to these points don't need to be computed. Many algorithms exist using different types of trees. Common variants include KD trees, metric/ball trees, and cover trees. For example, see Kibriya and Frank (2007). Tree-based methods can be used to compute exact solutions (i.e. nearest neighbors match those found by brute force search). This approach is efficient for low dimensional data, but performance can degrade in high dimensional settings. There are also tree-based methods for approximate nearest neighbor search (see below).
Approximate nearest neighbor search
Larger speedups can be achieved (particularly for high dimensional data) if one is willing to settle for approximate nearest neighbors rather than exact nearest neighbors. This is often sufficient in practice, especially for large datasets. Many different strategies have been proposed for approximate nearest neighbor search. See Muja and Lowe (2014) for a review.
Methods based on locality sensitive hashing (LSH) are particularly popular (e.g. see Andoni and Indyk 2006). The idea here is to use specialized hash functions that efficiently map data points to discrete buckets, such that similar points end up in the same bucket. One can search for neighbors among the subset of training points that are mapped to the same buckets as the query point. The hash functions should be tailored to the distance metric used to define nearest neighbors.
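A toy illustration of the signed-random-projection flavour of LSH (my own sketch; the hash family, dimensions, and data points are made up for illustration): the sign pattern of dot products with random hyperplanes forms the bucket key, so nearby points (in angle) tend to collide while dissimilar points do not.

```python
import random

random.seed(42)
d, n_planes = 8, 6   # data dimension and number of random hyperplanes

# Random hyperplanes through the origin; the sign pattern of the dot
# products serves as the bucket key (approximates cosine similarity).
planes = [[random.gauss(0, 1) for _ in range(d)] for _ in range(n_planes)]

def bucket(x):
    return tuple(1 if sum(p_i * x_i for p_i, x_i in zip(p, x)) >= 0 else 0
                 for p in planes)

a = [1.0] * d              # a query point
b = [1.0] * 7 + [0.9]      # nearly identical to a
c = [-1.0] * d             # pointing the opposite way

# Similar points are very likely to share a bucket; opposite points cannot,
# since negating a point flips every dot-product sign.
print(bucket(a), bucket(c))
```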
There are also approximate nearest neighbor search algorithms based on space-partitioning trees and nearest neighbor graphs. For example, see Muja and Lowe (2014) and Liu et al. (2005).
Dimensionality reduction
Dimensionality reduction maps high dimensional data points to a lower dimensional space. Searching for neighbors in the lower dimensional space is faster because distance computations operate on fewer dimensions. Of course, one must take into account the computational cost of the mapping itself (which depends strongly on the method used). Note that nearest neighbor search in the low dimensional space is equivalent to using a different distance metric.
In some cases (depending on the method and structure of the data), it's possible to reduce the dimensionality such that nearest neighbors in the low dimensional space approximately match those in the high dimensional space. For supervised learning (e.g. KNN regression and classification), matching neighbors in the high dimensional space typically doesn't matter. Rather, the goal is to learn a function with good generalization performance. Dimensionality reduction can sometimes increase generalization performance by acting as a form of regularization, or by providing a space where distances are more meaningful to the problem.
There's a large literature on dimensionality reduction including linear, nonlinear, supervised, and unsupervised methods. PCA is often the first thing people try because it's a standard method, works well in many cases, and scales efficiently to large datasets. But, whether it (or another method) will work well depends on the problem.
For an example application to nearest neighbor search, see Deegalla and Bostrom (2006), comparing PCA and random projections for nearest neighbor classification.
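As a rough sketch of why data-independent random projections can work (my own example, not from the paper): a random Gaussian map to fewer dimensions approximately preserves relative distances, so a point that was much closer to the query than another tends to stay closer after projection.

```python
import math
import random

random.seed(1)
d_in, d_out = 50, 8

# A random Gaussian projection matrix; scaling by 1/sqrt(d_out) keeps
# expected squared lengths roughly unchanged.
R = [[random.gauss(0, 1) / math.sqrt(d_out) for _ in range(d_in)]
     for _ in range(d_out)]

def project(x):
    return [sum(r_i * x_i for r_i, x_i in zip(row, x)) for row in R]

x = [1.0] * d_in
y = [1.0 + 0.01 * random.random() for _ in range(d_in)]   # very close to x
z = [50.0 * random.random() for _ in range(d_in)]         # far from x

px, py, pz = project(x), project(y), project(z)
# In 8 dimensions the ordering of distances still reflects the original
# 50-dimensional one: y remains much closer to x than z is.
print(math.dist(px, py), math.dist(px, pz))
```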
Feature selection
Feature selection methods retain a subset of the input features and discard the rest. Typically, labels/target values from the training set are used to select features relevant for solving a classification or regression problem. Similar to dimensionality reduction, feature selection speeds up nearest neighbor search by reducing the number of dimensions. It is not generally aimed toward preserving distances.
Nearest prototypes
The idea here is to compress the training set into a smaller number of representative points (called prototypes). This speeds up neighbor search by reducing the number of points to search over. It can also have a denoising effect for supervised learning problems. Garcia et al. (2012) review various approaches.
Combining methods
Many of the techniques described above could be combined to yield even further speedups. For example, Pan and Manocha (2011) and Gieseke et al. (2014) use GPUs to parallelize locality sensitive hashing and tree-based methods, respectively.
References
Andoni and Indyk (2006). Near-optimal hashing algorithms for approximate nearest neighbor in high dimensions.
Deegalla and Bostrom (2006). Reducing high-dimensional data by principal component analysis vs. random projection for nearest neighbor classification.
Garcia et al. (2008). Fast k nearest neighbor search using GPU.
Garcia et al. (2012). Prototype selection for nearest neighbor classification: Taxonomy and empirical study.
Gieseke et al. (2014). Buffer kd trees: processing massive nearest neighbor queries on GPUs.
Kibriya and Frank (2007). An empirical comparison of exact nearest neighbour algorithms.
Liu et al. (2005). An investigation of practical approximate nearest neighbor algorithms.
Muja and Lowe (2014). Scalable nearest neighbor algorithms for high dimensional data.
Pan and Manocha (2011). Fast GPU-based locality sensitive hashing for k-nearest neighbor computation.
what does it mean by more "efficient" estimator
I am just wondering, when comparing two estimators, say $T_1$ and $T_2$, what does it mean to say $T_1$ is more efficient than $T_2$?
https://en.wikipedia.org/wiki/Efficiency_(statistics)
For an unbiased estimator, efficiency is the precision of the estimator (reciprocal of the variance) divided by the upper bound of the precision
(which is the Fisher information). Equivalently, it's the lower bound on the variance (the Cramer-Rao bound) divided by the variance of the estimator.
The relative efficiency of two unbiased estimators is the ratio of their precisions (the bound cancelling out)
When you're dealing with biased estimators, relative efficiency is defined in terms of the ratio of
MSE.
Could someone give an easy but very concrete example.
Compare the sample mean ($\bar{x}$) and sample median ($\tilde{x}$) when trying to estimate $\mu$ at the normal.
They're both unbiased so we need the variance of each. The variance of the median for odd sample sizes can be written down from the variance of the $k$th order statistic but involves the cdf of the normal. In large samples $\frac{n}{\sigma^2}\text{ Var}(\tilde{x})$ approaches the asymptotic value reasonably quickly, so people tend to focus on the asymptotic relative efficiency.
The asymptotic relative efficiency of median vs mean as an estimator of $\mu$ at the normal is the ratio of variance of the mean to the (asymptotic) variance of the median when the sample is drawn from a normal population.
This is $\frac{\sigma^2/n}{\pi \sigma^2/(2 n)} = 2/\pi\approx 0.64$
There's another example discussed here: Relative efficiency: mean deviation vs standard deviation
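That $2/\pi$ figure is easy to check by simulation (my own sketch, not part of the original answer): draw many normal samples, estimate the variance of the sample mean and of the sample median, and take the ratio.

```python
import random
import statistics

random.seed(0)
m = 101      # sample size (odd, so the median is a single order statistic)
reps = 4000  # number of simulated samples

means, medians = [], []
for _ in range(reps):
    sample = [random.gauss(0, 1) for _ in range(m)]
    means.append(statistics.fmean(sample))
    medians.append(statistics.median(sample))

# Relative efficiency of the median vs the mean: Var(mean) / Var(median),
# which should come out near 2/pi ~ 0.64 for normal data.
are = statistics.pvariance(means) / statistics.pvariance(medians)
print(are)
```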
If we don't know θ, then how can we show one is smaller than the other in the above inequality.
Generally the MSE's will be some function of $\theta$ and $n$ (though they may be independent of $\theta$). So at any given $\theta$ you can compute their relative size.
Also I thought there is a SINGLE "true" value of the parameter θ, is it correct?
Yes, at least in the usual situations we'd be doing this in and assuming a frequentist framework.
then what does it mean by saying "for SOME value of θ" in the above statement [...] if there is only ONE, why it says "for SOME" value of θ
When you are comparing estimators you want ones that do well for every value of $\theta$. If you don't know what $\theta$ is (if you did, you wouldn't have to bother with estimators), it would be good if it worked well for whatever value you have.
Reasonable hyperparameter range for Latent Dirichlet Allocation?
Choice of $\alpha$ and $\beta$ is indeed tricky, since it impacts the topic modeling results. The Gibbs sampling paper by Griffiths et al. gives some insight into this:
The value of $\beta$ thus affects the granularity of the model: a
corpus of documents can be sensibly factorized into a set of topics at
several different scales, and the particular scale assessed by the
model will be set by $\beta$. With scientific documents, a large value
of $\beta$ would lead the model to find a relatively small number of
topics, perhaps at the level of scientific disciplines, whereas
smaller values of $\beta$ will produce more topics that address
specific areas of research.
Eventually for scientific documents, the authors chose the following hyper-parameters, $\beta=0.1$ and $\alpha=50/T$. But they had a corpus of around $28K$ documents and a vocabulary of $20K$ words, and they tried several different values of $T: [50, 100, 200, 300, 400, 500, 600, 1000]$.
Regarding your data. I have no experience with analyzing financial text data, but for the choice of
$\alpha$ and $\beta$, I would ask myself the following questions:
Given my word vocabulary, do I expect my resultant topics to be sparse? For most cases, this is true. Hence, typically the topic prior is chosen to be sparse with $\beta < 1$.
Given the topics, do I expect the distribution of topics in each document to be sparse? That is, each document only represents a few topics. If yes, then $\alpha < 1$.
Answering the above questions may not be straightforward with limited knowledge of the data. Since you have limited data, I would choose multiple values of $\alpha$ and $\beta$ - ranging from sparse to non-sparse priors - and find which one suits the dataset by computing the perplexity over some hold-out data. To put it more concretely:
Choose $\alpha_m$ from $[0.05, 0.1, 0.5, 1, 5, 10]$
Choose $\beta_m$ from $[0.05, 0.1, 0.5, 1, 5, 10]$
Run topic modeling on training data, with $(\alpha_m, \beta_m)$ pair
Find model perplexity on hold-out test data
Choose the value of $\alpha_m$ and $\beta_m$ with the minimum perplexity
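The selection loop above can be sketched as follows; note that fit_and_perplexity is a hypothetical stand-in for fitting LDA and scoring hold-out perplexity with whatever topic-modelling library you use, and its body here is made up purely so the grid search has something to minimise:

```python
from itertools import product

def fit_and_perplexity(alpha, beta):
    # Hypothetical placeholder: in practice, fit LDA on training data with
    # these priors and return perplexity on held-out documents. Here we just
    # pretend sparse priors near 0.1 fit best, for illustration only.
    return (alpha - 0.1) ** 2 + (beta - 0.1) ** 2 + 100.0

grid = [0.05, 0.1, 0.5, 1, 5, 10]
scores = {(a, b): fit_and_perplexity(a, b) for a, b in product(grid, grid)}

# Keep the (alpha, beta) pair with the minimum hold-out perplexity.
best_alpha, best_beta = min(scores, key=scores.get)
print(best_alpha, best_beta)  # -> 0.1 0.1 (for this mock scorer)
```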
Resources:
Latent Dirichlet Allocation: Original paper talks about perplexity.
Finding Scientific topics: Paper discussed above.
Rethinking LDA: Why priors matter: Paper discussing how prior structure affects topic modeling results.
GAM with categorical variables - interpretation
In a factor by variable smooth, like other simple smooths, the bases for the smooths are subject to identifiability constraints. If you just naively computed the basis of the required dimension, and given the defaults for s(), you'd get 2 basis functions that are in the null space of the smoothness penalty:
a flat, horizontal function, and
a linear function
Both are perfectly smooth and not penalised by the smoothness penalty as a result. The flat function is the same thing as the model intercept. The identifiability issue arises because you could add any value to the estimated coefficient for the intercept (constant) term and subtract the same value from the coefficient for the flat, horizontal basis function, and get the same fit but via a new model. As there is an infinite set of numbers you could add to the intercept, you have an infinity of models.
This is not good, so to alleviate the issue an identifiability constraint is used. There are several such constraints but the one that leads to good confidence interval coverage properties is the sum-to-zero constraint. Over the range of the covariate, the smooth is constrained to sum to zero. This means it is centred about zero and this means the flat function is deleted from the basis of the smooth.
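To make the sum-to-zero constraint concrete, here is a small illustration in Python with made-up values (mgcv actually imposes the constraint through a reparameterisation, so this is only a conceptual sketch):

```python
def centre_columns(B):
    """Apply a sum-to-zero constraint to each basis column of B
    (rows = observations): subtract the column mean so every basis
    function sums to zero over the observed covariate values."""
    n = len(B)
    means = [sum(col) / n for col in zip(*B)]
    return [[v - m for v, m in zip(row, means)] for row in B]

# Basis evaluated at covariate values 1, 2, 3:
# column 1 is the flat function, column 2 a linear function.
B = [[1.0, 1.0], [1.0, 2.0], [1.0, 3.0]]
C = centre_columns(B)
# The flat column becomes identically zero (it is absorbed by the
# intercept and dropped); the linear column now sums to zero.
```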
Now, in the case of factor by variables, because each smooth is centred about zero, the smooth itself contains no easy way to control for differences between the levels in terms of the mean response; say samples from condition F had, on average, larger values of pr than condition G1. We'd want the spline for F to be shifted up by some constant amount relative to G1. That's what the parametric terms are and they come from the + as.factor(Abbr) term in the model formula. The parametric terms represent the deviation of the indicated group from the mean of the reference group (in your case the level not listed, F). If you didn't include this term in the model, then the smooths may become more wiggly as they try to account for the mean shifts of the groups, which is not something you want.
The other main type of smooth you might use for this kind of model is the random factor smooth basis bs = "fs". This basis/smooth includes intercepts for each level of the grouping factor and as such doesn't need the parametric terms.
The approximate significance of the smooths actually represents a test that the indicated smooth is actually a flat, zero, function. Or put another way, it is the smooth equivalent of a t or Wald Z test of the null hypothesis that a coefficient in a linear model or GLM is equal to zero (i.e. has no effect). There is strong evidence against the null for each of your smooths, which is reflected in the strong non-linearity of the estimated smooths and that the confidence intervals for the smooths do not include 0 for most of the range of Year.
|
40,588
|
Implicit hypothesis testing: mean greater than variance and Delta Method
|
What distinguishes the first order test statistic (normal distribution) from the second order test statistic (Chi-squared distribution)
With the first order approximation you approximate the function $g(\theta)$ as a linear function of $\theta$. But this works only when there is actually a slope, that is when $g'(\theta) \neq 0$.
With the second order approximation you approximate the function $g(\theta)$ as a polynomial function (the square) of $\theta$, but this works only at a peak (or trough) of the function $g(\theta)$, where $g'(\theta) = 0$.
I do not believe that this is applicable to your case and that you applied it correctly (It seems like you just took the square of the first order).
The image below might illustrate this intuitively:
In this example $Y=0.03 X^2$. And $X \sim N(20, \sigma)$ is normally distributed with $\sigma$ changing from $36$ to $4$ and $1$. Simulations are made for 600 data points to create the histograms (60 points are used to plot on top of the curve in the graph). In the image on the left, when $X$ has a wide distribution, you see that the distribution of $Y$ is not well approximated with a linear transformation (it is a bit skewed), but as the variance of $X$ decreases (the images on the right) then the distribution starts to resemble more and more a normal distribution.
So that is what the linear transformation does in the Delta method. But when the slope is zero then this linearization doesn't work and you need to use a second order approximation of the curve. This is illustrated below
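A small numerical check of the zero-slope case (my own addition, not from the original figure): take $g(x)=0.03x^2$ with $X \sim N(0,1)$, so $g'(0)=0$ and the second-order approximation is exact, giving $Y = 0.03 X^2 \sim 0.03\,\chi^2_1$ with mean $0.03$. Note this centres $X$ at $0$, unlike the figure's $N(20,\sigma)$:

```python
import random

random.seed(0)
draws = [random.gauss(0.0, 1.0) for _ in range(200_000)]
y = [0.03 * x * x for x in draws]   # g(X) with g'(0) = 0
mean_y = sum(y) / len(y)            # should be near 0.03 = E[0.03 * chi2_1]
```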
What happens to the inequality sign?
With $H_0 : \mu >\sigma$ you have a composite hypothesis instead of a simple hypothesis $H_0 : \mu = \sigma$. This is not easy to deal with and you will typically not be able to find a hypothesis test where the probability of a type I error is equal for every value of the parameters that are possible under the null hypothesis.
In this case when you use the boundary for $H_0 : \mu = \sigma$ then you will have a rejection (type I error) rate $\alpha$ when the hypothesis is true $\mu = \sigma$ but you get smaller rejection rates when $\mu>\sigma$.
Is this the right way to deal with such a hypothesis test ?
The Delta method is very easy to apply. But in this case you could also consider the statistic $T = \sqrt{n}\frac{\bar{x}}{s}$ which follows a non-central t-distribution with non centrality parameter $\sqrt{n}\frac{\mu}{\sigma}$.
Then you can use code for computing the non central t-distribution to compute boundaries more precisely than the delta method approximation.
# testing performance of statistic mean(y)/sd(y)
# in comparison to non-central t-distribution
set.seed(1)
n = 5
mu = 3
sigma = 3
dt <- 0.2 # histogram binsize
# doing simulations
mc_test <- sapply(1:10^6, FUN = function(x) {y <- rnorm(n,mean=mu,sd=sigma); sqrt(n)*mean(y)/sd(y)})
# computing and plotting histogram
h <- hist(mc_test,
breaks=seq(min(mc_test)-dt,max(mc_test)+dt,dt),
xlim=c(-3,10),
freq = FALSE,
ylab = bquote(t-dist(T, nu == .(n), ncp==1)),
xlab = bquote(T == bar(x)/s),
main = "histogram of simulations compared with non-central t-distribution", cex.main=1
)
# adding non central t-distribution to the plot
t <- seq(-3,10,0.01)
lines(t,dt(t,n-1,sqrt(n)),col=2)
ts <- seq(qt(0.95,n,sqrt(n)),10,0.01)
polygon(c(rev(ts),ts),c(0*dt(ts,n-1,sqrt(n)),dt(ts,n-1,sqrt(n))),
col = rgb(0,0,0,0.3), border = NA)
# verify/compute how often boundary is exceeded
sum(mc_test>qt(0.95,n-1,sqrt(n)))/10^6
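The same 95% boundary can also be computed in Python, assuming scipy is available:

```python
from math import sqrt
from scipy.stats import nct

n = 5
# 95% critical value of T = sqrt(n) * xbar / s under mu = sigma,
# i.e. of the noncentral t with df = n - 1 and noncentrality sqrt(n)
boundary = nct.ppf(0.95, df=n - 1, nc=sqrt(n))
```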
Comparison of boundaries as function of the sample size $n$
Your statistic* $\frac{\sqrt{n} (\hat \mu - \hat \sigma)}{\hat \sigma \sqrt{1.5}} \sim N(0,1)$ leads to $\sqrt{n}\frac{\bar{x}}{s} \sim N(\sqrt{n},1.5)$ which is the asymptotic behaviour of the non central distribution that we derived:
#different values of n
n <- 3:200
# boundary based on t-distribution
bt <- qt(0.95,n-1,sqrt(n))
#boundary based on delta method
dt <- qnorm(0.95,sqrt(n),sqrt(1.5))
# plotting
plot(n,dt,type='l',
xlab = "n",ylab = "95% criterium" )
lines(n,bt,pch=21,col=2)
legend(0,16,c("t-distribution", "Delta-method"),box.col=0,col=c(1,2),lty=1,cex=0.7)
*In your computations the factor $3$ should be a factor $1.5$
$$[1,- \frac{1}{2\sigma}] \ \begin{bmatrix}\frac{\sigma^2}{n} & 0 \\0 & \frac{2 \sigma^4}{n} \end{bmatrix} \ [1,- \frac{1}{2\sigma}]^T
= \frac{1.5 \sigma^2}{n}$$
and also the square root term $\sqrt{n}$ should not be added (because you only have one measurement of $\hat\mu - \sqrt{\hat\sigma^2}$ ). You used a formula for the Delta method that incorporates a term $\sqrt{n}$ for multiple measurements, but you already accounted for multiple measurements when you expressed the variance of $\hat\mu$ and $\hat\sigma^2$.
|
40,589
|
Implicit hypothesis testing: mean greater than variance and Delta Method
|
Few notes:
1) In the last equation, $(\hat \mu - \hat \sigma)$ should be squared.
2) The covariance matrix $\Sigma(\mu, \sigma^2)$ shouldn't depend on $n$; it is an asymptotic quantity. The same goes for $\Gamma$, which in this case equals $\Gamma = \frac{3 \sigma^2}{2}$.
So the final answer is $T_n = \frac{2n (\hat \mu - \hat \sigma)^2}{3 \hat \sigma^2}$
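As a sanity check (a Monte Carlo sketch of my own, not part of the original answer): under $H_0:\mu=\sigma$ the statistic $T_n = \frac{2n(\hat \mu - \hat \sigma)^2}{3 \hat \sigma^2}$ should be approximately $\chi^2_1$, whose mean is $1$:

```python
import random
from math import sqrt

def wald_stat(y):
    """T = 2 n (mu_hat - sigma_hat)^2 / (3 sigma_hat^2)."""
    n = len(y)
    mu_hat = sum(y) / n
    sd_hat = sqrt(sum((v - mu_hat) ** 2 for v in y) / (n - 1))
    return 2 * n * (mu_hat - sd_hat) ** 2 / (3 * sd_hat ** 2)

random.seed(1)
n, reps = 100, 5_000
mu = sigma = 3.0
sims = [wald_stat([random.gauss(mu, sigma) for _ in range(n)])
        for _ in range(reps)]
mean_T = sum(sims) / reps  # near 1 if T is roughly chi-squared with 1 df
```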
|
40,590
|
How does lifelines calculate baseline hazard in CoxPHFitter?
|
I'm the author of lifelines and can help you.
Does it mean that I can specify it as any functions and select the best one during experiments?
Not normally, no. (In fact, if you could, that would make the Cox model fully parametric, not semi-parametric.) It being "unspecified" means it is non-parametric. If you have a strong prior assumption about what the baseline might be, there are other models (like AFT models) that could be used.
To answer your second question, how lifelines calculates the baseline survival, we use the formula on page 15 here. In code, this is represented here.
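For intuition, the baseline hazard in a Cox model is typically estimated with a Breslow-type estimator. The sketch below is my own stdlib illustration of that estimator, not lifelines' actual code; consult the linked formula for the exact form used:

```python
def breslow_cumulative_hazard(times, events, partial_hazards):
    """Breslow estimate of the baseline cumulative hazard H0(t).

    times:           observed durations
    events:          1 if the event was observed, 0 if censored
    partial_hazards: exp(x_i . beta) for each subject

    Returns (event_time, H0) pairs at each distinct event time.
    """
    order = sorted(range(len(times)), key=lambda i: times[i])
    at_risk = sum(partial_hazards)  # risk-set total, shrinks over time
    H, out, i = 0.0, [], 0
    while i < len(order):
        t = times[order[i]]
        deaths, leaving = 0.0, 0.0
        while i < len(order) and times[order[i]] == t:  # handle ties
            j = order[i]
            deaths += events[j]
            leaving += partial_hazards[j]
            i += 1
        if deaths:
            H += deaths / at_risk   # increment: d(t) / sum of risks at t
            out.append((t, H))
        at_risk -= leaving
    return out
```

With all partial hazards equal to 1 (i.e. $\beta = 0$) this reduces to the Nelson-Aalen estimator.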
|
40,591
|
What is the probability that a student gets a better score than another on a test with randomly selected questions?
|
A dynamic program will make short work of this.
Suppose we administer all questions to the students and then randomly select a subset $\mathcal{I}$ of $k=10$ out of all $n=100$ questions. Let's define a random variable $X_i$ to compare the two students on question $i:$ set it to $1$ if student A is correct and student B not, $-1$ if student B is correct and student A not, and $0$ otherwise. The total
$$X_\mathcal{I} = \sum_{i\in\mathcal{I}} X_i$$
is the difference in scores for the questions in $\mathcal I.$ We wish to compute $\Pr(X_\mathcal{I} \gt 0).$ This probability is taken over the joint distribution of $\mathcal I$ and the $X_i.$
The distribution function of $X_i$ is readily calculated under the assumption the students respond independently:
$$\eqalign{
\Pr(X_i=1) &= P_{ai}(1-P_{bi}) \\
\Pr(X_i=-1) &= P_{bi}(1-P_{ai}) \\
\Pr(X_i=0) &= 1 - \Pr(X_i=1) - \Pr(X_i=-1).
}$$
As a shorthand, let us call these probabilities $a_i,$ $b_i,$ and $d_i,$ respectively. Write
$$f_i(x) = a_i x + b_i x^{-1} + d_i.$$
This polynomial is a probability generating function for $X_i.$
Consider the rational function
$$\psi_n(x,t) = \prod_{i=1}^n \left(1 + t f_i(x)\right).$$
(Actually, $x^n\psi_n(x,t)$ is a polynomial: it's a pretty simple rational function.)
When $\psi_n$ is expanded as a polynomial in $t$, the coefficient of $t^k$ consists of the sum of all possible products of $k$ distinct $f_i(x).$ This will be a rational function with nonzero coefficients only for powers of $x$ from $x^{-k}$ through $x^k.$ Because $\mathcal{I}$ is selected uniformly at random, the coefficients of these powers of $x,$ when normalized to sum to unity, give the probability generating function for the difference in scores. The powers correspond to the size of $\mathcal{I}.$
The point of this analysis is that we may compute $\psi(x,t)$ easily and with reasonable efficiency: simply multiply the $n$ polynomials sequentially. Doing this requires retaining the coefficients of $1, t, \ldots, t^k$ in $\psi_j(x,t)$ for $j=0, 1, \ldots, n$ (we may of course ignore all higher powers of $t$ that appear in any of these partial products). Accordingly, all the necessary information carried by $\psi_j(x,t)$ can be represented by a $(2k+1)\times (k+1)$ matrix, with rows indexed by the powers of $x$ (from $-k$ through $k$) and columns indexed by $0$ through $k$.
Each step of the computation requires work proportional to the size of this matrix, scaling as $O(k^2).$ Accounting for the number of steps, this is a $O(k^2n)$-time, $O(k^2)$-space algorithm. That makes it quite fast for small $k.$ I have run it in R (not known for excessive speed) for $k$ up to $100$ and $n$ up to $10^5,$ where it takes nine seconds (on a single core). In the setting of the question with $n=100$ and $k=10,$ the computation takes $0.03$ seconds.
Here is an example where the $P_{ai}$ are uniform random values between $0$ and $1$ and the $P_{bi}$ are their squares (which are always less than the $P_{ai}$, thereby strongly favoring student A). I simulated 100,000 examinations, as summarized by this histogram of the net scores:
The blue bars indicate those results in which student A got a better score than B. The red dots are the result of the dynamic program. They agree beautifully with the simulation ($\chi^2$ test, $p=51\%$). Summing all the positive probabilities gives the answer in this case, $0.7526\ldots.$
Note that this calculation yields more than asked for: it produces the entire probability distribution of the difference in scores for all exams of $k$ or fewer randomly selected questions.
For those who wish a working implementation to use or port, here is the R code that produced the simulation (stored in the vector Simulation) and executed the dynamic program (with results in the array P). The repeat block at the end is there only to aggregate all unusually rare outcomes so that the $\chi^2$ test becomes obviously reliable. (In most situations this doesn't matter, but it keeps the software from complaining.)
n <- 100
k <- 10
p <- runif(n) # Student A's chances of answering correctly
q <- p^2 # Student B's chances of answering correctly
#
# Compute the full distribution.
#
system.time({
P <- matrix(0, 2*k+1, k+1) # Indexing from (-k,0) to (k,k)
rownames(P) <- (-k):k
colnames(P) <- 0:k
P[k+1, 1] <- 1
for (i in 1:n) {
a <- p[i] * (1 - q[i])
b <- q[i] * (1 - p[i])
d <- (1 - a - b)
P[, 1:k+1] <- P[, 1:k+1] +
a * rbind(0, P[-(2*k+1), 1:k]) +
b * rbind(P[-1, 1:k], 0) +
d * P[, 1:k]
}
P <- apply(P, 2, function(x) x / sum(x))
})
#
# Simulation to check.
#
n.sim <- 1e5
set.seed(17)
system.time(
Simulation <- replicate(n.sim, {
i <- sample.int(n, k)
sum(sign((runif(k) <= p[i]) - (runif(k) <= q[i]))) # Difference in scores, A-B
})
)
#
# Test the calculation.
#
counts <- tabulate(Simulation+k+1, nbins=2*k+1)
n <- sum(counts)
k.min <- 5
repeat {
probs <- P[, k+1]
i <- probs * n.sim >= k.min
z <- sum(probs[!i])
if (z * n >= 5) break
if (k.min * (2*k+1) >= n) break
k.min <- ceiling(k.min * 3/2)
}
probs <- c(z, probs[i])
counts <- c(sum(counts[!i]), counts[i])
chisq.test(counts, p=probs)
#
# The answer.
#
sum(P[(1:k) + k+1, k+1]) # Chance that A-B is positive
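For readers who prefer Python, here is a direct port of the dynamic program (my own translation, using the same indexing conventions as the R code):

```python
def score_diff_dist(p, q, k):
    """Distribution of (A's score - B's score) on a k-question exam whose
    questions are drawn uniformly at random from the n available ones.
    p[i], q[i]: chances that A (resp. B) answers question i correctly.
    Returns a list of 2k+1 probabilities for differences -k, ..., k."""
    n = len(p)
    # P[s][c]: coefficient of x^(s-k) t^c while building the product
    # prod_i (1 + t * f_i(x)), with f_i(x) = a_i x + b_i / x + d_i.
    P = [[0.0] * (k + 1) for _ in range(2 * k + 1)]
    P[k][0] = 1.0
    for i in range(n):
        a = p[i] * (1 - q[i])      # A right, B wrong: difference +1
        b = q[i] * (1 - p[i])      # B right, A wrong: difference -1
        d = 1.0 - a - b            # both right or both wrong: tie
        for c in range(k, 0, -1):  # descending c: column c-1 is still "old"
            for s in range(2 * k + 1):
                t = d * P[s][c - 1]
                if s > 0:
                    t += a * P[s - 1][c - 1]
                if s < 2 * k:
                    t += b * P[s + 1][c - 1]
                P[s][c] += t
    col = [P[s][k] for s in range(2 * k + 1)]
    total = sum(col)               # equals C(n, k)
    return [v / total for v in col]

p = [0.9, 0.2, 0.6, 0.4]
q = [0.5, 0.7, 0.1, 0.8]
dist = score_diff_dist(p, q, 2)
prob_a_wins = sum(dist[3:])        # mass on positive differences
```

The chance that A beats B is `sum(dist[k+1:])`, the total probability of a positive score difference.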
|
40,592
|
What is the probability that a student gets a better score than another on a test with randomly selected questions?
|
At each iteration,
$P(A>B)=P(A)\cdot(1-P(B))$
because A is only better when A is right and B is wrong. So for a particular question, if A is right 90% of the time and B is right 80% of the time, then the joint probability that A is right and B is wrong is
$P(A \text{ right}, B \text{ wrong}) = 0.9 \cdot 0.2 = 0.18$
Now you could write some code that goes through all ten chosen questions and assigns a point to either A or B based on this joint probability. At the end of each exam, the winner is the one with the most points. Do this many times and look at the probability of A winning over B.
I've written some code that does this here: https://nbviewer.jupyter.org/github/kevinmcinerney/exam_probabilities/blob/master/exam_probabilities.ipynb
The graph doesn't render on the link, but looks like this:
This simulation used values $ P_{ai} = P_{bi} = 0.5$
In this example, I ran 1000 exams, each made up of 25 questions, all drawn from the same set of 100 questions. The y-axis is the running probability that A did better than B on an exam; the exam number (0-1000) is along the x-axis.
Both A and B had random probabilities of getting each question correct, so the graph converges on 50%. I used 25 questions because 10 isn't very representative of the population of 100 questions; with ten questions the graph still converges to around 50%, just more noisily. The three lines are three separate trials.
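A minimal stdlib version of such a simulation (function and variable names are my own, not taken from the linked notebook):

```python
import random

def prob_a_beats_b(p_a, p_b, k=10, n_exams=1000, seed=0):
    """Estimate the probability that A scores strictly higher than B on
    an exam of k questions sampled at random from the full question set.
    p_a, p_b: per-question probabilities of answering correctly."""
    rng = random.Random(seed)
    n, wins = len(p_a), 0
    for _ in range(n_exams):
        qs = rng.sample(range(n), k)
        score_a = sum(rng.random() < p_a[i] for i in qs)
        score_b = sum(rng.random() < p_b[i] for i in qs)
        wins += score_a > score_b
    return wins / n_exams
```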
|
What is the probability that a student gets a better score than another on a test with randomly sele
|
At each iteration,
$P(A>B)=P(A)∗(1−P(B))$
because A is only better when A is right and B is wrong. So for a particular question, if A is right 90% of the time and B is right 80% of the time, then the
|
What is the probability that a student gets a better score than another on a test with randomly selected questions?
At each iteration,
$P(A>B)=P(A)\,(1-P(B))$
because A only does better on a question when A is right and B is wrong (assuming their answers are independent). So for a particular question, if A is right 90% of the time and B is right 80% of the time, then the joint probability that A is right and B is wrong is
$P(A \text{ right}, B \text{ wrong}) = 0.9 \times 0.2 = 0.18$
Now you could write some code that goes through all ten chosen questions and assigns a point to either A or B based on this joint probability. At the end of each exam, the winner is the one with the most points. Do this many times and look at the probability of A winning over B.
I've written some code that does this here: https://nbviewer.jupyter.org/github/kevinmcinerney/exam_probabilities/blob/master/exam_probabilities.ipynb
The graph doesn't render on the link, but looks like this:
This simulation used the values $P_{ai} = P_{bi} = 0.5$.
In this example, I ran 1000 exams, each one made up of 25 questions, all drawn from the same set of 100 questions. The y-axis is the probability that A did better than B on an exam; the exam number (0-1000) is along the x-axis.
Both A and B had a random probability of getting each question correct, so the graph converges on 50%. I used 25 questions because 10 isn't very representative of the population of 100 questions; with only ten questions the graph is less likely to converge to a value around 50%. The three lines are three separate trials.
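To make the procedure concrete, here is a minimal sketch of this kind of simulation (my own illustration, not the notebook's code; it compares total exam scores rather than assigning per-question points, and the parameter values are assumed):

```python
import random

def simulate_exams(p_a=0.5, p_b=0.5, n_questions=25, n_exams=1000, seed=0):
    """Estimate P(A scores strictly higher than B) over many exams.

    Each exam: both students answer the same n_questions questions;
    A answers each correctly with probability p_a, B with probability p_b,
    independently of each other.
    """
    rng = random.Random(seed)
    a_wins = 0
    for _ in range(n_exams):
        score_a = sum(rng.random() < p_a for _ in range(n_questions))
        score_b = sum(rng.random() < p_b for _ in range(n_questions))
        if score_a > score_b:
            a_wins += 1
    return a_wins / n_exams
```

Note that with $p_a = p_b$ this estimates the probability of a *strict* win, which sits a little below 50% because ties are possible.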
40,593
For creating the joint posterior distribution for multiple variables, are the associated Bayesian priors usually assumed independent of each other?
Your expression is only correct when you assume independent priors. Otherwise, the expression would become
$$
p(\theta_1, \theta_2 | D) \propto p(D|\theta_1, \theta_2) p(\theta_1, \theta_2)
$$
In this expression, you may need further assumptions to work with $p(\theta_1, \theta_2)$ or you can work with it as is. If you assume independence you get your expression again. From what I've seen, it's also common to factor this expression as
$$
p(\theta_1, \theta_2) = p(\theta_1|\theta_2)p(\theta_2)
$$
This requires no further independence assumptions, and it might be easy to specify the conditional distribution $p(\theta_1|\theta_2)$. Having said that, in many cases it can be completely justifiable to have independent priors on $\theta_1$ and $\theta_2$ so it really depends on the individual problem you're trying to solve.
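As a hypothetical illustration of working with the factored prior $p(\theta_1|\theta_2)\,p(\theta_2)$, one can evaluate the unnormalized posterior on a grid. The particular choices below (normal hyperprior, normal conditional prior, normal likelihood, and the observed value) are all assumed for the sketch:

```python
import math

def normal_pdf(x, mu, sigma):
    """Density of a normal distribution evaluated at x."""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

d = 1.5  # a single observed data point (assumed)
grid = [i / 10 for i in range(-40, 41)]  # coarse grid over both parameters

# Dependent prior: theta2 ~ N(0, 1), theta1 | theta2 ~ N(theta2, 1);
# likelihood: d ~ N(theta1, 1).
# Unnormalized posterior: p(t1, t2 | d) ∝ p(d | t1) p(t1 | t2) p(t2)
post = {
    (t1, t2): normal_pdf(d, t1, 1.0) * normal_pdf(t1, t2, 1.0) * normal_pdf(t2, 0.0, 1.0)
    for t1 in grid for t2 in grid
}
z = sum(post.values())
post = {k: v / z for k, v in post.items()}  # normalize over the grid
```

For this conjugate setup the marginal prior on $\theta_1$ is $N(0,2)$, so the grid posterior mean of $\theta_1$ lands near $d \cdot 2/3 = 1.0$, which makes a handy sanity check.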
40,594
For creating the joint posterior distribution for multiple variables, are the associated Bayesian priors usually assumed independent of each other?
As Maurits M mentions in another answer, whether independence makes sense is really problem-specific. The OP question asked:
"I am wondering why most set-ups assume that the priors above are independent. What happens if we do not have it?"
which is really 2 questions.
(1) Why do most set-ups assume the priors are independent?
My guess (w/o being able to read minds) would be that multivariate distributions that can be written in closed form are few and far between.
This is also the reason why MCMC techniques are so popular. It is much easier to write a product of marginal priors, and doing so may make the sampler easier to write down.
(2) What happens if we do not have it?
This question could be interpreted as the impact of incorrectly specifying independence, or as to how to proceed if you know independence is not a reasonable assumption. I'll answer both.
If you've incorrectly assumed independence, then the degree of the impact will depend on how egregious this violation is w.r.t. the true underlying model. For example, naive Bayes assumes independence and often works well even if the independence is empirically not true. The reason is that often symmetries exist in the data generation mechanism which "cancel out" the independence violations. However, this statement is more of an empirical justification and I'm unaware of any group theoretic proof of this claim.
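To illustrate the naive Bayes point, here is a toy sketch (all data made up): a hand-rolled Bernoulli naive Bayes classifier where one feature is duplicated, so the independence assumption is badly violated, yet the predictions are unchanged:

```python
import math
from collections import Counter

def nb_log_scores(x, data):
    """Bernoulli naive Bayes log-scores per class, with Laplace smoothing."""
    class_counts = Counter(label for _, label in data)
    n = len(data)
    scores = {}
    for c, n_c in class_counts.items():
        s = math.log(n_c / n)  # log prior
        for j, xj in enumerate(x):
            n_match = sum(1 for feats, label in data if label == c and feats[j] == xj)
            s += math.log((n_match + 1) / (n_c + 2))  # smoothed per-feature likelihood
        scores[c] = s
    return scores

def predict(x, data):
    scores = nb_log_scores(x, data)
    return max(scores, key=scores.get)

data = [((1, 0), "a"), ((1, 1), "a"), ((0, 1), "b"), ((0, 0), "b")]
# Duplicate feature 0 so that features 0 and 2 are perfectly dependent:
data_dup = [((f0, f1, f0), y) for (f0, f1), y in data]
```

Duplicating the feature double-counts its evidence, which distorts the posterior probabilities, but here the *ranking* of classes (and hence the prediction) is unchanged.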
40,595
Derivative of the Joint Distribution Interpretation
The first-order partial derivatives of a multivariate joint distribution function can be considered as giving the density of the differentiated variable, jointly with the cumulative probability of the other variable(s). One simple way to see this interpretation is to convert the partial derivative to a density integral, integrated over the other dimensions. From the fundamental theorem of calculus we can write the partial derivative as:
$$\begin{aligned}
\frac{\partial}{\partial x} F_{X,Y} (x, y)
&= \int \limits_{- \infty}^y f_{X,Y} (x, t) dt \\[6pt]
&= \int \limits_{- \infty}^y f_{Y|X} (t|x) f_X(x) dt \\[6pt]
&= \int \limits_{- \infty}^y f_{Y|X} (t|x) dt \times f_X(x) \\[6pt]
&= \mathbb{P} (Y \leqslant y | X = x) f_X (x).
\end{aligned}$$
This shows that the partial derivative gives us the density of $X$ at $x$ jointly with the probability of the event $Y \leqslant y$ — i.e., the probability mass spread along the half-line $X = x$, $Y \leqslant y$ within the two-dimensional space of the two random variables. The partial derivative with respect to $y$ has an analogous interpretation.
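The identity is easy to check numerically. As an assumed example, take $X, Y$ independent $\text{Expo}(1)$, so $F_{X,Y}(x,y) = (1-e^{-x})(1-e^{-y})$ and $\mathbb{P}(Y \leqslant y \mid X = x)\, f_X(x) = (1-e^{-y})\,e^{-x}$:

```python
import math

# Numeric sanity check of the identity above for independent X, Y ~ Expo(1)
# (this particular joint distribution is just an assumed example).

def F(x, y):
    """Joint CDF of two independent Exponential(1) variables."""
    return (1 - math.exp(-x)) * (1 - math.exp(-y))

def dF_dx(x, y, h=1e-6):
    """Central finite-difference approximation of the partial derivative in x."""
    return (F(x + h, y) - F(x - h, y)) / (2 * h)

x, y = 1.3, 0.7
lhs = dF_dx(x, y)
rhs = (1 - math.exp(-y)) * math.exp(-x)  # P(Y <= y | X = x) * f_X(x)
assert abs(lhs - rhs) < 1e-6
```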
40,596
Derivative of the Joint Distribution Interpretation
If you take the joint CDF of $X$ and $Y$ and differentiate it with respect to just one of the variables, then let the other variable go to infinity, you're left with the marginal PDF of the differentiated variable.
Let's verify this using a simple joint distribution of two i.i.d. RVs $X$ and $Y \sim \text{Expo}(1)$.
On one hand, we can get to the marginal PDF through the joint PDF:
$$
F_{XY}(x,y) = \int_{0}^{y}\int_{0}^{x} e^{-s}e^{-t}\,ds\,dt = (1-e^{-x})(1-e^{-y})
\\
f_{XY}(x,y) = \frac{\partial^{2} }{\partial x \partial y} F_{XY}(x,y) = e^{-x}e^{-y}
\\
f_{X}(x) = \int_{Y}f_{XY}(x,y)\,dy = e^{-x} \int_{0}^{\infty } e^{-y}\,dy = e^{-x}
$$
where the last expression simplifies to $e^{-x}$ because the density of $Y$ integrates to 1 over its support.
Alternatively, we can go directly from the joint CDF to the marginal PDF. Since $F_X(x) = \lim_{y \to \infty} F_{XY}(x,y)$,
$$f_{X}(x) = \frac{\partial }{\partial x} \lim_{y \to \infty} F_{XY}(x,y) = \frac{\partial }{\partial x} \left(1-e^{-x}\right) = e^{-x}$$
The marginal PDF, if you're unfamiliar with the term, is basically the PDF of $X$ standalone, "freed up" from $Y$.
40,597
Study path to Bayesian thinking?
I've started down my own path towards understanding the bayesian way of thinking and I'll share my perspective. I started off reading classic papers on the various samplers and going through derivations for the conjugate cases, and I don't think that got me very far. True, the elites are going to write their own samplers and exploit every conjugate opportunity possible. But if you want to get a good feel for the approach and potentially gain a few useful methods, there are more direct ways.
My recommendation is to find a good bayesian modeling tool that takes care of the sampling and lets you focus on specifying the likelihoods and priors. For me, this has been Stan. It's based on a particular sampler that doesn't require much tinkering. The User's Guide and Reference Manual (available on the Documentation page) reads like a textbook, and you can learn a lot by going through the examples. When you have an idea for a new model, you can try it out and usually get something working without too much time. You can see some of my own experimentation here.
We are in an age where the focus is on managing computations on enormous data sets, and software like Stan is going to encourage you to perform intense computation on even small data sets (depending on the model). But I think it's worth the time to study and understand. There are still plenty of "small data" problems out there, and it's nice to be able to frame ideas in machine learning (e.g., L2 regularization) in the bayesian context (where there is actually theory!).
40,598
Study path to Bayesian thinking?
From your business point of view you might be motivated by Bayesian decision theory, which is a way to apply Bayesian inference to make decisions under uncertainty.
If so, you'd find the topics that introductions to Bayesian analysis often focus on (such as specifying various prior and likelihood distributions and performing sampling computations or derivations analytically) are simply means to this ultimate end.
Here are some resources specifically on this topic:
http://www.cs.haifa.ac.il/~rita/ml_course/lectures/Bayesian_Decision.pdf
http://www.statsathome.com/2017/10/12/bayesian-decision-theory-made-ridiculously-simple/
https://www.cc.gatech.edu/~hic/CS7616/pdf/lecture2.pdf
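The core recipe is short: draw from the posterior, average each action's loss over the draws, and pick the action with the smallest expected loss. Here is a minimal sketch in which the posterior ($\text{Beta}(3,9)$ over a conversion rate $p$, e.g. 2 successes and 8 failures under a uniform prior) and both loss functions are assumed purely for illustration:

```python
import random

def expected_losses(n_draws=100_000, seed=1):
    """Monte Carlo estimates of expected loss for two hypothetical actions."""
    rng = random.Random(seed)
    launch = hold = 0.0
    for _ in range(n_draws):
        p = rng.betavariate(3, 9)            # draw from the posterior over p
        launch += 10 * max(0.25 - p, 0.0)    # loss if we launch and p turns out low
        hold += 1.0                          # flat opportunity cost of holding
    return launch / n_draws, hold / n_draws

launch_loss, hold_loss = expected_losses()
best_action = "launch" if launch_loss < hold_loss else "hold"
```

With these made-up numbers the expected loss of launching is well below the holding cost, so the Bayes-optimal action is to launch; changing the loss functions changes the decision without touching the inference.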
40,599
Study path to Bayesian thinking?
I took a course on Bayesian Data Analysis last semester. It assumes no previous background. Here's the course homepage, where the instructor has put all of the materials: https://michael-franke.github.io/BDACM_2017/
We used the Kruschke textbook for the course. It worked out fine. I don't think there's much of a problem with working in R; you still get to understand how things work.
40,600
Is this a valid way to construct a confidence interval?
Hanley and Lippman-Hand (1983) give something like the following argument, which provides motivation for the rule. Taking $n$ as fixed, $P(X=0|p) =(1-p)^n$.
Solving $(1-p)^n \geq \alpha\,$ for $p\,$ we get $p \leq 1-\alpha^{\frac{1}{n}}$. The largest $p$ that keeps the probability of observing $0$ no less than $\alpha$ is therefore $1-\alpha^{\frac{1}{n}}$, which serves as the upper limit.
Now $\alpha^{\frac{1}{n}}=e^{\frac{1}{n}\log{\alpha}} = 1+\frac{1}{n}\log{\alpha}+\frac12 (\frac{1}{n}\log{\alpha})^2 + \ldots$
Taking this to first order we get $1-\alpha^{\frac{1}{n}} \approx -\frac{1}{n}\log{\alpha}$, so the upper limit on $p$ is approximately $-\log(\alpha)/n$. When $\alpha=0.05$, $-\log(0.05)/n\approx 3/n$.
Jovanovic & Levy (1997) improve on this, grounding it more clearly in a CI argument by casting it as a Clopper-Pearson interval and obtaining the same $(1-p)^n=\alpha$ bound, and hence the same approximate upper bound on $p$:
if $X = x$ is the observed number of events in $n$ trials, the Clopper-Pearson (max-P) upper $(1-\alpha)\,100\%$ bound may be obtained as a solution to
$$\sum_{t=0}^x {n\choose t}p^t (1-p)^{n-t} =\alpha$$
Clearly, when $x = 0$ the expression reduces to $(1- p)^n = \alpha$.
They also discuss some other arguments.
Hanley, J. A., and Lippman-Hand, A. (1983),
"If nothing goes wrong, is everything all right? Interpreting zero numerators"
Journal of the American Medical Association, 249(13), 1743-1745.
Jovanovic, B. D. and Levy, P. S. (1997),
"A Look at the Rule of Three"
The American Statistician, 51(2), 137-139
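A quick numerical check shows how close the rule of three is to the exact bound $1-\alpha^{1/n}$ (a short sketch of the comparison described above):

```python
import math

# Exact Clopper-Pearson upper bound when x = 0 solves (1 - p)^n = alpha,
# i.e. p = 1 - alpha**(1/n); the rule of three approximates it by 3/n.

def exact_upper(n, alpha=0.05):
    """Exact upper confidence limit on p when 0 events are seen in n trials."""
    return 1 - alpha ** (1 / n)

def rule_of_three(n):
    """The rule-of-three approximation to the upper limit."""
    return 3 / n

for n in (10, 30, 100, 1000):
    print(f"n={n:5d}  exact={exact_upper(n):.5f}  3/n={rule_of_three(n):.5f}")
```

Since $1-e^{-c/n} < c/n$, the exact bound always sits slightly below $3/n$, and the gap shrinks as $n$ grows.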