Example of time series prediction using neural networks in R

I was looking for the same thing and stumbled upon this question. Since I hadn't found an example, I decided to make my own. Be aware, I am no expert in neural nets or forecasting :)
To model time series effectively with neural nets (nnets), I believe an important property nnets should have is some kind of memory (keeping track of what happened in the past). Therefore, plain feed-forward nets are probably a bad idea. One family of nnets that can effectively simulate memory is the family of recurrent neural networks, and among the best-known types of recurrent neural networks are probably Elman networks (together with long short-term memory (LSTM) nets, I would say). To find out more about Elman networks you can go through the original paper introducing the concept or through Wikipedia. In short, they have an additional layer, called the context layer, used as a type of memory. The following figure (source) illustrates the idea.
Luckily for us, there is the RSNNS package in R, which is capable of fitting Elman nets. The package is described in detail here.
Now that we have gone through the basics, let's see how we can implement an example in R with the RSNNS package.
library(RSNNS)
#
# simulate an arima time series example of the length n
#
set.seed(10001)
n <- 100
ts.sim <- arima.sim(list(order = c(1,1,0), ar = 0.7), n = n-1)
#
# create an input data set for ts.sim
# sw = sliding-window size
#
# the last point of the time series will not be used
# in the training phase, only in the prediction/validation phase
#
sw <- 1
X <- lapply(sw:(n-2),
            function(ind){
                ts.sim[(ind-sw+1):ind]
            })
X <- do.call(rbind, X)
Y <- sapply(sw:(n-2),
            function(ind){
                ts.sim[ind+1]
            })
# used to validate prediction properties
# on the last point of the series
newX <- ts.sim[(n-sw):(n-1)]
newY <- ts.sim[n]
# build an elman network based on the input
model <- elman(X, Y,
               size = c(10, 10),
               learnFuncParams = c(0.001),
               maxit = 500,
               linOut = TRUE)
#
# plot the results
#
limits <- range(c(Y, model$fitted.values))
plot(Y, type = "l", col = "red",
     ylim = limits, xlim = c(0, length(Y)+1), # leave room for the held-out point
     ylab = "", xlab = "")
lines(model$fitted.values, col = "green", type = "l")
points(length(Y)+1, newY, col = "red", pch = 16)
points(length(Y)+1, predict(model, newdata = newX),
       pch = "X", col = "green")
This code should result in the following figure
So what we did with the code is as follows. First, we created a time-series example (from the ARIMA model). After that, we decoupled/sliced the time-series example into inputs of the form (sw previous points, next point) for all pairs except the last one (whose next point is the last point of the time-series example). The parameter sw defines the "sliding window". I won't debate here what the proper size for the sliding window is, but just note that because Elman networks have memory, a sliding window of size one is more than a reasonable approach (also, take a look at this post).
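For readers outside R, the same slicing step can be sketched in Python (the helper name and toy series below are mine, just for illustration):

```python
import numpy as np

def sliding_window_pairs(series, sw=1):
    """Slice a 1-D series into (window, next point) training pairs.

    Mirrors the R code above: inputs are windows of `sw` consecutive
    values, targets are the value that follows each window, and the
    final point of the series is held out for validation.
    """
    stop = len(series) - 2                      # last usable window end
    X = np.array([series[i - sw + 1:i + 1] for i in range(sw - 1, stop)])
    y = np.array([series[i + 1] for i in range(sw - 1, stop)])
    return X, y

series = np.arange(10.0)                        # toy stand-in for ts.sim
X, y = sliding_window_pairs(series, sw=2)
# X rows hold 2 consecutive points; y is the point after each window;
# series[-1] never appears as a target, matching the held-out point
```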
After the preparations are done, we can simply build an Elman network with the elman function. There are two parameters you should be careful about: size and learnFuncParams. The size parameter lets you define the size of the network (the hidden layers), and choosing it is more an art than a science. A rule of thumb for learnFuncParams is to keep it small if that is feasible (your processing power allows it / you have enough time to wait :D).
And voilà, you have a neural network capable of predicting a future value. The predictive power of this approach for our example is illustrated in the previous figure. The red curve shows our simulated time series (without the last point) and the green curve what was obtained with the fitted Elman network. The red point denotes the last point (the one that was not used during the fitting process) and the green point what the fitted network predicted. Not bad at all :)
This was an example of how to use RNNs (Elman networks) in R to make predictions/forecasts. Some might argue that RNNs are not the best fit for the problem and that there are better nnet models for forecasting. Since I'm not an expert in the field, I will avoid discussing these issues.
An interesting read if you would like to find out more about RNNs is the paper "A Critical Review of Recurrent Neural Networks for Sequence Learning".
Daylight saving time in time series modelling (e.g. load data)

I've been thinking more about my previous answer, and now I'm not so sanguine.
A problem arises because electricity consumption varies by hour depending on both external environmental conditions (especially, temperature), and also on the social conventions that determine work patterns. When daylight savings time begins or ends, the alignment between these two shifts abruptly: the "hour during which the sun sets" may shift from falling during the work day, to falling during evening/dinner-time.
Hence the challenge involves not just how to edit values immediately at the point of change-over. The question is whether DST and standard time should be considered as, in some sense, distinct regimes.
The care with which you address the issue depends, of course, on what you are going to use the forecast for. For many purposes, it might be OK to just ignore the subtleties, and proceed as per your first proposal. My suggestion remains to try that first, and see if the accuracy of your model is good enough to meet the needs of your specific application.
If results are unsatisfactory, a second stage of complexity might involve breaking your project in half, and creating separate models for the winter regime and the summer regime. This approach has a lot to recommend it, actually: the relationship between temperature and power consumption is roughly U-shaped, hitting a minimum at about 18 degrees C, reflecting differences in the way temperature changes affect demand for heating versus cooling. Hence whatever model you come up with will end up acting something like the union of two separate regime-specific models anyway.
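To make the U-shape concrete, here is a hedged Python sketch on invented data (the 18 degree minimum and every coefficient are assumptions of the demo, not estimates from real load data):

```python
import numpy as np

rng = np.random.default_rng(4)
temp = rng.uniform(-5.0, 35.0, 300)             # hourly temperatures, deg C
# hypothetical U-shaped demand: heating below ~18 C, cooling above it
load = 40.0 + 0.05 * (temp - 18.0) ** 2 + rng.normal(0.0, 1.0, 300)

coef = np.polyfit(temp, load, 2)                # quadratic fit: [c2, c1, c0]
t_min = -coef[1] / (2.0 * coef[0])              # vertex = estimated comfort point
```

The fitted vertex recovers the assumed minimum near 18 degrees, which is the point where a single model has to reconcile the heating and cooling regimes.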
A variation on the above -- almost a re-phrasing -- would be to include in your regression equation a DST dummy variable. That sounds sensible.
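The dummy-variable idea can be sketched in a few lines of Python (again on purely simulated data; the 3-unit DST level shift is an assumption of the demo):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
temp = rng.uniform(-5.0, 30.0, n)                        # covariate
dst = (rng.uniform(size=n) < 0.5).astype(float)          # 1 = DST in effect
load = 50.0 + 0.8 * temp + 3.0 * dst + rng.normal(0.0, 1.0, n)

# design matrix with intercept, temperature, and the DST dummy
X = np.column_stack([np.ones(n), temp, dst])
beta, *_ = np.linalg.lstsq(X, load, rcond=None)
# beta[2] estimates the regime-level shift attributable to DST
```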
Again, the big question is: how much time and effort does it make sense to devote to exploring this issue and its implications for forecast quality? If you are doing applied work (as I gather you are), the goal is to craft a model that is fit for purpose, rather than to devote your life to finding the best of all possible models.
If you really want to explore this issue, you might look up this paper:
Ryan Kellogg, Hendrik Wolff, "Daylight time and energy: Evidence from an Australian experiment," Journal of Environmental Economics and Management, Volume 56, Issue 3, November 2008, Pages 207-220, ISSN 0095-0696, doi:10.1016/j.jeem.2008.02.003.
Keywords: Energy; Daylight saving time; Difference-in-difference-in-difference
The authors take advantage of the fact that two Australian states at the same latitude have different rules concerning implementing daylight savings time. This difference creates conditions for a natural experiment regarding the effect of DST on energy consumption, with one state acting as the "treatment group" and its neighbor acting as the "control group". Additional background is available from Hendrik Wolff's website. It's interesting work -- though perhaps overkill for your application.
A problem arises because electricity consumption varies by hour depending on both external environmental conditions (espe | Daylight saving time in time series modelling (e.g. load data)
I've been thinking more about my previous answer, and now I'm not so sanguine.
A problem arises because electricity consumption varies by hour depending on both external environmental conditions (especially, temperature), and also on the social conventions that determine work patterns. When daylight savings time begins or ends, the alignment between these two shifts abruptly: the "hour during which the sun sets" may shift from falling during the work day, to falling during evening/dinner-time.
Hence the challenge involves not just how to edit values immediately at the point of change-over. The question is whether DST and standard time should be considered as, in some sense, distinct regimes.
The care with which you address the issue depends, of course, on what you are going to use the forecast for. For many purposes, it might be OK to just ignore the subtleties, and proceed as per your first proposal. My suggestion remains to try that first, and see if the accuracy of your model is good enough to meet the needs of your specific application.
If results are unsatisfactory, a second stage of complexity might involve breaking your project in half, and creating separate models for the winter regime and the summer regime. This approach has a lot to recommend it, actually: the relationship between temperature and power consumption is roughly U-shaped, hitting a minimum at about 18 degrees C, reflecting differences in the way temperature changes affect demand for heating versus cooling. Hence whatever model you come up with will end up acting something like the union of two separate regime-specific models anyway.
A variation on the above -- almost a re-phrasing -- would be to include in your regression equation a DST dummy variable. That sounds sensible.
Again, the big question is: how much time and effort does it make sense to devote to exploring this issue and it's implications for forecast quality? If you are doing applied work (as I gather you are), the goal is to craft a model that is fit-to-purpose, rather than devote your life to finding the best of all possible models.
If you really want to explore this issue, you might look up this paper:
Ryan Kellogg, Hendrik Wolff, Daylight time and energy: Evidence from
an Australian experiment, Journal of Environmental Economics and
Management, Volume 56, Issue 3, November 2008, Pages 207-220, ISSN
0095-0696, 10.1016/j.jeem.2008.02.003.
Keywords: Energy; Daylight saving time;
Difference-in-difference-in-difference
The authors take advantage of the fact that two Australian states at the same latitude have different rules concerning implementing daylight savings time. This difference creates conditions for a natural experiment regarding the effect of DST on energy consumption, with one state acting as the "treatment group" and its neighbor acting as the "control group". Additional background is available from Hendrik Wolff's website. It's interesting work -- though perhaps overkill for your application. | Daylight saving time in time series modelling (e.g. load data)
I've been thinking more about my previous answer, and now I'm not so sanguine.
A problem arises because electricity consumption varies by hour depending on both external environmental conditions (espe |
Daylight saving time in time series modelling (e.g. load data)

The suggested procedure -- toss out the extra hour, fill in the missing hour by averaging nearby values -- strikes me as quite reasonable. The affected hours are in the middle of the night when the system is drawing its base load. There is much less volatility of base loads than of peak loads. Therefore, the forecasting results are unlikely to be sensitive to the details: any reasonable way of accounting for the change-over should yield about the same result in terms of calibrating a forecasting model.
Mean structure and mean/variance relationship in regression

Consider the assumptions of the standard linear model:
1. $E[Y_i] = \beta_0 + \beta_1 X_i.$
2. The "errors" $\{Y_i - (\beta_0+\beta_1 X_i)\}$ are independently and identically distributed.
The first assumption concerns the "mean structure": it stipulates that the expectations of the response variables (their "means") depend linearly on the explanatory variables. Failure (or violation) of this assumption is reflected in lack of goodness of fit and, often, heteroscedastic residuals. It might be cured by introducing more variables or interaction terms into the model and/or by nonlinear re-expressions of the response and/or explanatory variables.
For more information about the mean structure, read about goodness of fit testing, introducing interaction terms, and checking for linearity of responses.
The second assumption concerns the errors and comprises two related ideas: of independence and of identical distribution. It is natural to turn to the second moments of the multivariate distribution of $(Y_i)$ to describe departures from (or violations) of these assumptions:
Lack of independence can be revealed by nonzero covariances between $Y_i$ and $Y_j$, $i \ne j$.
Lack of identical distributions can be revealed by differing variances among the $Y_i$ (lack of homoscedasticity).
(Subtracting $\beta_0+\beta_1 X_i$ from $Y_i$ does not change the variances or covariances because these are understood to be conditional on the $X_i.$ That explains why we can focus attention on the properties of the $Y_i$ instead of the properties of the errors.)
We might collectively term this the "variance structure." In the second case, frequently a departure from identical distributions takes a simple form: the variance of $Y_i$ is seen to have some definite relationship to the expectation of $Y_i$. This is the "mean/variance" relationship. Such relationships occur in actual data and often indicate that some nonlinear re-expression of $Y$ would be useful. For instance, when the variance of $Y$ is proportional to the expectation of $Y$, we might have better success fitting a linear model in which the square root of $Y$ is the response variable rather than $Y$ itself.
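As a quick numerical illustration of such a mean/variance relationship, consider Poisson responses, for which the variance equals the mean (a sketch, not part of the original argument):

```python
import numpy as np

rng = np.random.default_rng(1)
means = [4.0, 16.0, 64.0]
samples = [rng.poisson(m, 50_000) for m in means]

raw_vars = [s.var() for s in samples]              # tracks the mean: ~4, ~16, ~64
sqrt_vars = [np.sqrt(s).var() for s in samples]    # roughly constant, near 1/4
```

The square-root transform stabilizes the variance (to about 1/4 by the delta method), which is exactly why re-expressing $Y$ helps when variance is proportional to the mean.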
There are many approaches to handling violations of the second assumption, ranging from generalized linear models to methods of re-expressing the variables, including Box-Cox transformations, as well as specialized models such as GARCH (for time series). Standard regression diagnostics, including plots of residuals against fitted values, aim at detecting and quantifying departures from this assumption.
Note how these considerations are related to building an appropriate model. They do not refer to the typical sizes of the residuals (the differences between the observations of the $Y_i$ and their estimated values): that is a property of the data which is beyond the control of the analyst. The difference between a linear model satisfying (1) and (2) with small residuals and another linear model satisfying (1) and (2) with large residuals is illustrated in Erogol's answer.
Mean structure and mean/variance relationship in regression

The mean/variance relation shows the spread of the data points in feature space. More variance means more spread in the space, and thus a fit produced by a linear regression algorithm is more likely to perform badly.
As you can see from the figures, in the high-variance case the regression solution performs rather badly for newly arriving instances, and even for the training set.
Probability that uniformly random points in a rectangle have Euclidean distance less than a given threshold

We can solve this problem analytically using some geometric intuition and arguments. Unfortunately, the answer is quite long and a bit messy.
Basic setup
First, let's set out some notation. Assume we draw points uniformly at random from the rectangle $[0,a] \times [0,b]$. We assume without loss of generality that $0 < b < a$. Let $(X_1,Y_1)$ be the coordinates of the first point and $(X_2,Y_2)$ be the coordinates of the second point. Then, $X_1$, $X_2$, $Y_1$, and $Y_2$ are mutually independent with $X_i$ distributed uniformly on $[0,a]$ and $Y_i$ distributed uniformly on $[0,b]$.
Consider the Euclidean distance between the two points. This is
$$
D = \sqrt{(X_1-X_2)^2 + (Y_1-Y_2)^2} =: \sqrt{ Z_1^2 + Z_2^2} \> ,
$$
where $Z_1 = |X_1-X_2|$ and $Z_2 = |Y_1-Y_2|$.
Triangular distributions
Since $X_1$ and $X_2$ are independent uniforms, then $X_1 - X_2$ has a triangular distribution, whence $Z_1 = |X_1 - X_2|$ has a distribution with density function
$$
f_a(z_1) = \frac{2}{a^2}(a-z_1) ,\quad 0 < z_1 < a \> .
$$
The corresponding distribution function is $F_a(z_1) = 1 - (1-z_1/a)^2$ for $0 \leq z_1 \leq a$.
Similarly, $Z_2 = |Y_1 - Y_2|$ has density $f_b(z_2)$ and distribution function $F_b(z_2)$.
Note that since $Z_1$ is a function only of the two $X_i$ and $Z_2$ is a function only of the $Y_i$, then $Z_1$ and $Z_2$ are independent. So the distance between the points is the euclidean norm of two independent random variables (with different distributions).
The left panel of the figure shows the distribution of $X_1 - X_2$ and the right panel shows $Z_1 = |X_1 - X_2|$ where $a = 5$ in this example.
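The triangular-distribution claim is easy to verify by simulation; here is a short Python sketch comparing the empirical CDF of $Z_1 = |X_1 - X_2|$ with $F_a(z) = 1 - (1 - z/a)^2$:

```python
import numpy as np

rng = np.random.default_rng(2)
a, trials = 5.0, 200_000
x1 = rng.uniform(0.0, a, trials)
x2 = rng.uniform(0.0, a, trials)

z1 = np.abs(x1 - x2)                    # should be triangular on [0, a]
z = 2.0                                 # arbitrary test point
empirical = (z1 <= z).mean()
theoretical = 1.0 - (1.0 - z / a) ** 2  # F_a(2) = 1 - 0.6^2 = 0.64
```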
Some geometric probability
So $Z_1$ and $Z_2$ are independent and are supported on $[0,a]$ and $[0,b]$ respectively. For fixed $d$, the distribution function of the euclidean distance is
$$\renewcommand{\Pr}{\mathbb P}\newcommand{\rd}{\,\mathrm{d}}
\Pr(D \leq d) = \iint_{\{z_1^2+z_2^2 \leq d^2\}} f_a(z_1) f_b(z_2) \rd z_1 \rd z_2 \> .
$$
We can think of this geometrically as having a distribution on the rectangle $[0,a] \times [0,b]$ and considering a quarter circle of radius $d$. We'd like to know how much probability mass lies inside the intersection of these two regions. There are three different possibilities to consider:
Region 1 (orange): $0 \leq d < b$. Here the quarter circle lies completely within the rectangle.
Region 2 (red): $b \leq d \leq a$. Here the quarter circle intersects the rectangle along the top and bottom edges.
Region 3 (blue): $a < d \leq \sqrt{a^2 + b^2}$. The quarter circle intersects the rectangle along the top and right edges.
Here is a figure, where we draw an example radius of each of the three types. The rectangle is defined by $a = 5$, $b = 4$. The grayscale heatmap within the rectangle shows the density $f_a(z_1) f_b(z_2) \rd z_1 \rd z_2$ where dark areas have higher density and lighter areas have smaller density.
Some ugly calculus
To calculate the probabilities, we need to do some calculus. Let's consider each of the regions in turn and we'll see that a common integral will arise. This integral has a closed-form, though it's not very pretty.
Region 1: $0 \leq d < b$.
$$\newcommand{\radius}{\sqrt{d^2 - y^2}}
\Pr(D \leq d) = \int_0^d \int_0^{\radius} f_b(y) f_a(x) \rd x \rd y = \int_0^d f_b(y) \int_0^{\radius} f_a(x) \rd x \rd y \>.
$$
Now, the inner integral yields $\frac{1}{a^2}\radius (2 a - \radius)$. So, we are left to compute an integral of the form
$$
G(c) - G(0) = \int_0^c (b - y) \radius (2a - \radius) \rd y \> ,
$$
where in this case of interest $c = d$. The antiderivative of the integrand is
$$
\begin{align*}
G(y) &= \int (b - y) \radius (2a - \radius) \rd y \\
&= \frac{a}{3} \radius ( y (3 b - 2 y) + 2 d^2) \\
&\quad + \,a b d^2 \tan^{-1}\Big(\frac{y}{{\scriptstyle \radius}}\Big) - b d^2 y \\
&\quad + \,\frac{b y^3}{3} + \frac{(d y)^2}{2} - \frac{y^4}{4} \> .
\end{align*}
$$
From this we get that $\Pr(D \leq d) = \frac{2}{a^2 b^2} (G(d) - G(0))$.
Region 2: $b \leq d \leq a$.
$$
\Pr(D \leq d) = \frac{2}{a^2 b^2} (G(b) - G(0)) \>,
$$
by the same reasoning as for Region 1, except now we must integrate along the $y$-axis all the way up to $b$ instead of just $d$.
Region 3: $a < d \leq \sqrt{a^2 + b^2}$.
$$
\begin{align*}
\Pr(D \leq d) &= \int_0^\sqrt{d^2-a^2} f_b(y)\rd y + \int_{\sqrt{d^2-a^2}}^b f_b(y) \int_{0}^\radius f_a(x) \rd x \rd y \\
&= F_b(\sqrt{d^2-a^2}) + \frac{2}{a^2 b^2} (G(b) - G(\sqrt{d^2-a^2}))
\end{align*}
$$
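Putting the three regions together, the antiderivative $G$ and the piecewise distribution function can be transcribed directly into code (a sketch of the formulas above; I use $\arcsin(y/d)$, which equals $\tan^{-1}(y/\sqrt{d^2-y^2})$ on $[0,d]$, to avoid a division by zero at $y = d$):

```python
import numpy as np

def G(y, d, a, b):
    # antiderivative from the text; arcsin(y/d) = arctan(y / sqrt(d^2 - y^2))
    r = np.sqrt(max(d * d - y * y, 0.0))
    return ((a / 3.0) * r * (y * (3.0 * b - 2.0 * y) + 2.0 * d * d)
            + a * b * d * d * np.arcsin(min(y / d, 1.0))
            - b * d * d * y
            + b * y**3 / 3.0 + (d * y) ** 2 / 2.0 - y**4 / 4.0)

def distance_cdf(d, a, b):
    """P(D <= d) for two uniform points in [0,a] x [0,b], assuming 0 < b <= a."""
    if d <= 0.0:
        return 0.0
    c = 2.0 / (a * a * b * b)
    if d < b:                                   # region 1
        return c * (G(d, d, a, b) - G(0.0, d, a, b))
    if d <= a:                                  # region 2
        return c * (G(b, d, a, b) - G(0.0, d, a, b))
    s = np.sqrt(min(d * d - a * a, b * b))      # region 3
    F_b = 1.0 - (1.0 - s / b) ** 2              # triangular CDF at s
    return F_b + c * (G(b, d, a, b) - G(s, d, a, b))
```

As a sanity check, the function should be monotone in $d$, vanish at $0$, and reach $1$ at the diagonal length $\sqrt{a^2+b^2}$.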
Below is a simulation of 20000 points where we plot the empirical distribution as grey points and the theoretical distribution as a line, colored according to the particular region that applies.
From the same simulation, below we plot the first 100 pairs of points and draw lines between them. Each is colored according to the distance between the pair of points and which region this distance falls into.
The expected number of pairs of points within distance $d$ is simply
$$
\mathbb E[\xi] = {n \choose 2} \Pr(D \leq d) \>,
$$
by linearity of expectation.
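As a final sanity check, a Monte Carlo sketch of $\Pr(D \leq d)$ and the expected pair count (the values of $a$, $b$, $d$, and $n$ are arbitrary; for these particular values the Region 1 formula works out to roughly 0.408):

```python
import numpy as np
from math import comb

rng = np.random.default_rng(3)
a, b, d, trials = 5.0, 4.0, 2.0, 200_000
p1 = rng.uniform([0.0, 0.0], [a, b], size=(trials, 2))
p2 = rng.uniform([0.0, 0.0], [a, b], size=(trials, 2))
p_hat = (np.linalg.norm(p1 - p2, axis=1) <= d).mean()

n = 100                                     # number of points
expected_pairs = comb(n, 2) * p_hat         # E[xi] = C(n,2) * P(D <= d)
```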
Basic setup
First, let's set out some notation. Assume we | Probability that uniformly random points in a rectangle have Euclidean distance less than a given threshold
Probability that uniformly random points in a rectangle have Euclidean distance less than a given threshold
If the points are truly uniformly distributed, i.e. in a fixed known pattern, then for any distance d, you can simply loop over all pairs and count the ones within the distance. Your probability is that count divided by the total number of pairs, n(n-1)/2.
If you have the additional freedom to pick how the n points are distributed/picked, then this is the rectangular version of the Bertrand paradox. That page shows a number of ways of answering this question based on how you distribute your points.
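The pair-counting loop described above takes only a few lines in any language; a sketch in Python (the function name `fraction_within` is mine):

```python
import math
from itertools import combinations

def fraction_within(points, d):
    """Fraction of unordered pairs of points at Euclidean distance <= d."""
    pairs = list(combinations(points, 2))
    hits = sum(1 for (x1, y1), (x2, y2) in pairs
               if math.hypot(x1 - x2, y1 - y2) <= d)
    return hits / len(pairs)

# The four corners of the unit square: 4 side pairs at distance 1,
# 2 diagonal pairs at distance sqrt(2).
corners = [(0, 0), (0, 1), (1, 0), (1, 1)]
print(fraction_within(corners, 1.0))   # 4 of the 6 pairs qualify
```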
How to tell quantitatively whether 1D data is clustered around 1 or 3 values?
I advise strongly against using k-means here. The results for different values of k aren't very well comparable. The method is just a crude heuristic. If you really want to use clustering, use EM clustering, since your data seems to contain normal distributions. And validate your results!
Instead, the obvious approach is to fit both a single Gaussian function and three Gaussian functions (for example using the Levenberg-Marquardt method), maybe constrained to the same height (to avoid degeneracy).
Then test which of the two models fits better.
How to tell quantitatively whether 1D data is clustered around 1 or 3 values?
Fit a mixture distribution to the data, something like a mixture of 3 normal distributions, then compare the likelihood of that fit to the fit of a single normal distribution (using a likelihood ratio test, or AIC/BIC). The flexmix package for R may be of help.
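As an illustration of the idea (not of the flexmix algorithm itself), here is a from-scratch sketch in Python: a plain EM fit of a $k$-component 1-D Gaussian mixture, with the $k=1$ and $k=3$ fits compared by BIC. All names here are ad hoc.

```python
import math
import random

def npdf(x, mu, sd):
    return math.exp(-0.5 * ((x - mu) / sd) ** 2) / (sd * math.sqrt(2.0 * math.pi))

def fit_gmm_1d(xs, k, iters=100):
    """Plain EM for a k-component 1-D Gaussian mixture.
    Deterministic quantile initialization; returns (weights, means, sds, loglik)."""
    n = len(xs)
    srt = sorted(xs)
    w = [1.0 / k] * k
    mu = [srt[int((j + 0.5) * n / k)] for j in range(k)]
    sd = [max((srt[-1] - srt[0]) / (2.0 * k), 1e-3)] * k
    for _ in range(iters):
        # E-step: responsibility of component j for each point
        r = []
        for x in xs:
            p = [w[j] * npdf(x, mu[j], sd[j]) for j in range(k)]
            s = sum(p) or 1e-300
            r.append([pj / s for pj in p])
        # M-step: re-estimate weights, means and variances
        for j in range(k):
            nj = max(sum(ri[j] for ri in r), 1e-12)
            w[j] = nj / n
            mu[j] = sum(ri[j] * x for ri, x in zip(r, xs)) / nj
            var = sum(ri[j] * (x - mu[j]) ** 2 for ri, x in zip(r, xs)) / nj
            sd[j] = max(math.sqrt(var), 1e-6)
    ll = sum(math.log(sum(w[j] * npdf(x, mu[j], sd[j]) for j in range(k)) or 1e-300)
             for x in xs)
    return w, mu, sd, ll

def bic(loglik, k, n):
    # free parameters: k means + k standard deviations + (k - 1) weights
    return (3 * k - 1) * math.log(n) - 2.0 * loglik

# Demo: three well-separated modes; BIC should strongly prefer k = 3
rng = random.Random(0)
xs = ([rng.gauss(0.0, 0.5) for _ in range(60)]
      + [rng.gauss(5.0, 0.5) for _ in range(60)]
      + [rng.gauss(10.0, 0.5) for _ in range(60)])
ll1 = fit_gmm_1d(xs, 1)[3]
ll3 = fit_gmm_1d(xs, 3)[3]
print(bic(ll1, 1, len(xs)), bic(ll3, 3, len(xs)))
```

In R, flexmix (or mclust) automates exactly this kind of fit-and-compare workflow.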
How to tell quantitatively whether 1D data is clustered around 1 or 3 values?
If you want to use K-means clustering, then you need a way to compare the $K=1$ and $K=3$ cases. One approach would be to use the gap statistic from Tibshirani et al. and choose the $K$ that provides the better value. There's an R implementation available in SLmisc, though that particular function will try $K=1,2,3$, so you will need to take care to ensure that only $K=1$ or $K=3$ can be returned as the optimal value.
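For intuition, here is a simplified sketch in Python of the gap statistic idea (not the SLmisc implementation): the within-cluster dispersion of a 1-D k-means fit is compared with that of uniform reference data drawn over the data range, roughly as in Tibshirani et al. All function names are mine.

```python
import math
import random

def kmeans_1d(xs, k, iters=50):
    """Lloyd's algorithm in 1-D with deterministic quantile initialization."""
    srt = sorted(xs)
    centers = [srt[int((j + 0.5) * len(xs) / k)] for j in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for x in xs:
            nearest = min(range(k), key=lambda j: (x - centers[j]) ** 2)
            clusters[nearest].append(x)
        centers = [sum(c) / len(c) if c else centers[j]
                   for j, c in enumerate(clusters)]
    return clusters

def log_wk(xs, k):
    """log of the pooled within-cluster sum of squares around the centroids."""
    clusters = kmeans_1d(xs, k)
    w = sum(sum((x - sum(c) / len(c)) ** 2 for x in c) for c in clusters if c)
    return math.log(max(w, 1e-12))

def gap_stat(xs, k, b=10, seed=0):
    """Gap(k): mean reference log-dispersion (uniform over the data range)
    minus the observed log-dispersion."""
    rng = random.Random(seed)
    lo, hi = min(xs), max(xs)
    ref = [log_wk([rng.uniform(lo, hi) for _ in xs], k) for _ in range(b)]
    return sum(ref) / b - log_wk(xs, k)

# Demo: with three tight, well-separated groups, Gap(3) should exceed Gap(1)
rng = random.Random(1)
xs = ([rng.gauss(0.0, 0.5) for _ in range(60)]
      + [rng.gauss(5.0, 0.5) for _ in range(60)]
      + [rng.gauss(10.0, 0.5) for _ in range(60)])
print(gap_stat(xs, 1), gap_stat(xs, 3))
```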
How to tell quantitatively whether 1D data is clustered around 1 or 3 values?
Use a K-means clustering algorithm to identify the various means.
Look for the function KNN on R-seek to find the appropriate function.
Improving data analysis through a better visualization of data?
Although the other respondents have provided useful insights, I find myself disagreeing with some of their points of view. In particular, I believe that graphics which can show the details of the data (without being cluttered) are richer and more rewarding to view than those that overtly summarize or hide the data, and I believe all the data are interesting, not just those for computer X. Let's take a look.
(I am showing small plots here to make the point that quite a lot of numbers can be usefully shown, in detail, in small spaces.)
This plot shows the individual data values, all $80 = 2 \times 4 \times 10$ of them. It uses the distance along the y-axis to represent computing times, because people can most quickly and accurately compare distances on a common axis (as Bill Cleveland's studies have shown). To ensure that variability is understood correctly in the context of actual time, the y-axis is extended down to zero: cutting it off at any positive value will exaggerate the relative variation in timing, introducing a "Lie Factor" (in Tufte's terminology).
Graphic geometry (point markers versus line segments) clearly distinguish computer X (markers) from computer Y (segments). Variations in symbolism--both shape and color for the point markers--as well as variation in position along the x axis clearly distinguish the programs. (Using shape assures the distinctions will persist even in a grayscale rendering, which is likely in a print journal.)
The programs appear not to have any inherent order, so it is meaningless to present them alphabetically by their code names "a", ..., "d". This freedom has been exploited to sequence the results by the mean time required by computer X. This simple change, which requires no additional complexity or ink, reveals an interesting pattern: the relative timings of the programs on computer Y differ from the relative timings on computer X. Although this might or might not be statistically significant, it is a feature of the data that this graphic serendipitously makes apparent. That's what we hope a good graphic will do.
By making the point markers large enough, they almost blend visually into a graphical representation of total variability by program. (The blending loses some information: we don't see where the overlaps occur, exactly. This could be fixed by jittering the points slightly in the horizontal direction, thereby resolving all overlaps.)
This graphic alone could suffice to present the data. However, there is more to be discovered by using the same techniques to compare timings from one run to another.
This time, horizontal position distinguishes computer Y from computer X, essentially by using side-by-side panels. (Outlines around each panel have been erased, because they would interfere with the visual comparisons we want to make across the plot.) Within each panel, position distinguishes the run. Exactly as in the first plot--and using the same marker scheme to distinguish the programs--the markers vary in shape and color. This facilitates comparisons between the two plots.
Note the visual contrast in marker patterns between the two panels: this has an immediacy not afforded by the tables of numbers, which have to be carefully scanned before one is aware that computer Y is so consistent in its timings.
The markers are joined by faint dashed lines to provide visual connections within each program. These lines are extra ink, seemingly unnecessary for presenting the data, so I suspect Professor Tufte would eschew them. However, I find they serve as useful visual guides to separate the clutter where markers for different programs nearly overlap.
Again, I presume the runs are independent and therefore the run number is meaningless. Once more we can exploit that: separately within each panel, runs have been sequenced by the total time for the four algorithms. (The x axis does not label run numbers, because this would just be a distraction.) As in the first plot, this sequencing reveals several interesting patterns of correlation among the timings of the four algorithms within each run. Most of the variation for computer X is due to changes in algorithm "b" (red squares). We already saw that in the first graphic. The worst total performances, however, are due to two long times for algorithms "c" and "d" (gold diamonds and green triangles, respectively), and these occurred within the same two runs. It is also interesting that the outliers for programs "a" and "c" both occurred in the same run. These observations could reveal useful information about variation in program timing for computer X. They are examples of how, because these graphics show the details of the data (rather than summaries like bars or boxplots), much can be seen concerning variation and correlations--but I needn't elaborate on that here; you can explore it for yourself.
I constructed these graphics without giving any thought to a "story" or "spinning" the data, because I wanted first to see what the data have to say. Such graphics will never grace the pages of USA Today, perhaps, but due to their ability to reveal patterns by enabling fast, accurate visual comparisons, they are good candidates for communicating results to a scientific or technical audience. (Which is not to say they are without flaws: there are some obvious ways to improve them, including jittering in the first and supplying good legends and judicious labels in both.) So yes, I agree that attention to the potential audience is important, but I am not persuaded that graphics ought to be created with the intention of advocating or pressing a particular point of view.
In summary, I would like to offer this advice.
Use design principles found in the literature on cartography and cognitive neuroscience (e.g., Alan MacEachren) to improve the chances that readers will interpret your graphics as you intend and that they will be able to draw honest, unbiased conclusions from them.
Use design principles found in the literature on statistical graphics (e.g., Ed Tufte and Bill Cleveland) to create informative data-rich presentations.
Experiment and be creative. Principles are the starting point for making a statistical graphic, but they can be broken. Understand which principles you are breaking and why.
Aim for revelation rather than mere summary. A satisfying graphic clearly reveals patterns of interest in the data. A great graphic will reveal unexpected patterns and invite us to make comparisons we might not have thought of beforehand. It may prompt us to ask new questions, and then more questions. That is how we advance our understanding.
Improving data analysis through a better visualization of data?
Plots let you tell a story, to spin the data in the way that you want the reader to interpret your results. What's the takeaway message? What do you want to stick in their minds? Determine that message, then think about how to make it into a figure.
In your plots, I don't know what message I should learn and you give me too much of the raw data back---I want efficient summaries, not the data themselves.
For plot 1, I'd ask, what comparisons do you want to make? The charts that you have illustrate the run times across program for a given computer. It sounds like you want to do the comparisons across computers for a given program. If this is the case, then you want the stats for program a on computer x to be in the same plot as the stats for program a on computer y. I'd put all 8 boxes in your two boxplots in the same figure, ordered ax, ay, bx, by, ... to facilitate the comparison that you are really making.
The same goes for plot 2, but I find this plot strange. You are basically showing every data point that you have---a box for each run and a run only has 4 observations. Why not just give me a box plot of total run times for computer x and one for computer y?
The same "too much data" critique applies to your last plot as well. Plot 3 doesn't add any new information to plot 2. I can get the overall time if I just multiply the mean time by 4 in plot 2. Here, too, you could plot a box each for computer x and y, but these will literally be multiples of the plot that I proposed to replace plot 2.
I agree with @Andy W that computer y isn't that interesting and maybe you want to just state that and exclude it from the plots for brevity (though I think the suggestions that I made can help you trim these plots down). I don't think that tables are very good ways to go, however.
Improving data analysis through a better visualization of data?
Your plots seem fine to me, and if you have space constraints you could place them all in one plot instead of three separate ones (e.g. use par(mfrow=c(3,2)) and then just output them to all the same device).
There isn't much to report though for Machine Y, it literally has no variation except for program b. I do think the graphs are informative to see not only how much longer the running times are for Machine X but also how much the running times vary.
If this really is your use case though, it is such simple data that placing all of it in a table would be sufficient to demonstrate the difference between machines (although I believe the graphs are still useful if you can afford room to place them in the document as well).
30,715 | Why is R plotting standardized residuals against theoretical quantiles in a Q-Q plot? | When you use the standardized residuals, the expected value of the residuals is zero and the variance is (approximately) one. This has three benefits:
If you rescale one of your variables (say change kilometres to miles), the residual plots remain unchanged.
In the qqplot, the residuals should lie on the line y = x.
You expect 95% of your residuals to lie between -1.96 and 1.96. This makes it easier to spot outliers.
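Those bounds come straight from the standard normal distribution; a quick check (illustrative Python, standard library only) confirms the 95% figure:

```python
from statistics import NormalDist

# Probability mass of a standard normal between -1.96 and 1.96
z = NormalDist()  # mean 0, sd 1
coverage = z.cdf(1.96) - z.cdf(-1.96)
print(round(coverage, 4))  # -> 0.95
```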
30,716 | Why is R plotting standardized residuals against theoretical quantiles in a Q-Q plot? | The theoretical residuals in a linear model are independent and identically normally distributed. However, the observed residuals are not independent and do not have equal variances. Standardizing the residuals divides each one by its estimated standard deviation (computed using information from the hat matrix), making their variances more equal. This is a more meaningful residual to look at in the qqplot.
Also, are you really running qqplot on the fitted model? Or is this the qqplot from running plot on the model?
30,717 | What is the term for a time series regression having more than one predictor? | ARIMAX (Box-Tiao) is what it is called when you add covariates to ARIMA models; it is basically ARIMA + X.
http://www.r-bloggers.com/the-arimax-model-muddle/
Also search for Panel data or TSCS: 'Time-series–cross-section (TSCS) data consist of comparable time series data observed on a variety of units'
See:
http://as.nyu.edu/docs/IO/2576/beck.pdf
or https://stat.ethz.ch/pipermail/r-sig-mixed-models/2010q4/004530.html
30,718 | What is the term for a time series regression having more than one predictor? | This is called a Transfer Function Model. It has also been referred to as a Dynamic Regression Model.
30,719 | What is the term for a time series regression having more than one predictor? | Searching the SAS documentation for ARIMAX (based on Patrick's answer), I found (in "The ARIMA Procedure: Input Variables and Regression with ARMA Errors") that they listed the following terms:
ARIMAX
Box-Tiao
Transfer Function Model
Dynamic Regression
Intervention Model
Interrupted Time Series Model
Regression Model with ARMA Errors
The term ARIMAX was used almost equally as much as the term Transfer Function Model. Intervention Model and Interrupted Time Series Model appear to refer to a different kind of model than the others.
30,720 | Forcing a set of numbers to a gaussian bell-curve | A scaled range, like 200 to 800 (for SATs, e.g.), is just a change of units of measurement. (It works exactly like changing temperatures in Fahrenheit to those in Celsius.)
The middle value of 500 is intended to correspond to the average of the data. The range is intended to correspond to about 99.7% of the data when the data do follow a Normal distribution ("Bell curve"). It is guaranteed to include 8/9 of the data (Chebyshev's Inequality).
In this case, the formula 1-5 computes the standard deviation of the data. This is simply a new unit of measurement for the original data. It needs to correspond to 100 units in the new scale. Therefore, to convert an original value to the scaled value,
Subtract the average.
Divide by the standard deviation.
Multiply by 100.
Add 500.
If the result lies beyond the range $[200, 800]$ you can either use it as-is or "clamp" it to the range by rounding up to 200, down to 800.
In the example, using data $\{1,3,4,5,7\}$, the average is $4$ and the SD is $2$. Therefore, upon rescaling, $1$ becomes $(1 - 4)/2 * 100 + 500 = 350$. The entire rescaled dataset, computed similarly, is $\{350, 450, 500, 550, 650\}$.
When the original data are distributed in a distinctly non-normal way, you need another approach. You no longer compute an average or SD. Instead, put all the scores in order, from 1st (smallest) up to $n$th (largest). These are their ranks. Convert any rank $i$ into its percentage $(i-1/2)/n$. (In the example, $n=5$ and data are already in rank order $i=1,2,3,4,5$. Therefore their percentages are $1/10, 3/10, 5/10, 7/10, 9/10$, often written equivalently as $10\%, 30\%$, etc.) Corresponding to any percentage (between $0$ and $1$, necessarily) is a normal quantile. It is computed with the normal quantile function, which is closely related to the error function. (Simple numerical approximations are straightforward to code.) Its values, which typically will be between -3 and 3, have to be rescaled (just as before) to the range $[200, 800]$. Do this by first multiplying the normal quantile by 100 and then adding 500.
The normal quantile function is available in many computing platforms, including spreadsheets (Excel's normsinv, for instance). For example, the normal quantiles (or "normal scores") for the data $\{1,3,4,5,7\}$ are $\{372, 448, 500, 552, 628\}$.
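Both rescalings above can be reproduced numerically. Here is an illustrative sketch in Python, standard library only (qnorm in R or normsinv in Excel plays the role of inv_cdf here):

```python
from statistics import NormalDist, mean, pstdev

data = [1, 3, 4, 5, 7]  # already sorted, so rank i = position i

# Linear rescaling: subtract mean, divide by (population) SD, * 100, + 500
m, sd = mean(data), pstdev(data)  # 4 and 2 for this data set
linear = [(x - m) / sd * 100 + 500 for x in data]
print(linear)  # [350.0, 450.0, 500.0, 550.0, 650.0]

# Normal scoring: rank i -> percentage (i - 1/2)/n -> normal quantile -> rescale
n = len(data)
inv_cdf = NormalDist().inv_cdf  # the normal quantile function
scores = [round(inv_cdf((i - 0.5) / n) * 100 + 500) for i in range(1, n + 1)]
print(scores)  # [372, 448, 500, 552, 628]
```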
This "normal scoring" approach will always give scores between 200 and 800 when you have 370 or fewer values. When you have 1111 or fewer values, all but the highest and lowest will have scores between 200 and 800. | Forcing a set of numbers to a gaussian bell-curve | A scaled range, like 200 to 800 (for SATs, e.g.), is just a change of units of measurement. (It works exactly like changing temperatures in Fahrenheit to those in Celsius.)
30,721 | Forcing a set of numbers to a gaussian bell-curve | You could try this approach - normalise your data set to range between the values -1 and +1 thus:
$$
\left(\frac{\text{individual_value} - \text{min_of_all_values}}{\text{max_of_all_values} - \text{min_of_all_values}}-0.5\right)*2.
$$
This will convert every value in your data set to a value between -1 and +1, with the actual maximum and minimum values being set to +1 and -1 respectively; then reset these +1 and -1 values to +0.9999 and -0.9999 (necessary for the following calculations).
Then apply the Fisher Transformation to each of the above normalised values to "force it" to approximately conform to a normal distribution, and then "un-normalise" each of these Fisher Transform values to range in value between 200 and 800 thus:
$$
\frac{\text{Fish_value} - \text{min_all_Fish_values}}{\text{max_all_Fish_values} - \text{min_all_Fish_values}}*600 + 200
$$
The maximum Fisher Transform value will be set to exactly 800, the minimum Fisher Transform value will be set to exactly 200, and all the other values will lie between these two extremes, according to an approximate normal distribution.
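The whole recipe can be sketched as follows (illustrative Python; math.atanh is the Fisher transformation, and the sample data are made up):

```python
import math

def fisher_scores(values):
    """Normalise to [-1, 1], apply the Fisher transform (atanh),
    then rescale the result to the range [200, 800]."""
    lo, hi = min(values), max(values)
    norm = [((v - lo) / (hi - lo) - 0.5) * 2 for v in values]
    # atanh is undefined at exactly +/-1, so pull the endpoints in
    norm = [max(min(x, 0.9999), -0.9999) for x in norm]
    fish = [math.atanh(x) for x in norm]
    f_lo, f_hi = min(fish), max(fish)
    return [(f - f_lo) / (f_hi - f_lo) * 600 + 200 for f in fish]

print(fisher_scores([3, 10, 11, 12, 19]))  # extremes map to 200.0 and 800.0
```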
Referencing your original question on SO and the issue of scalability, the advantage of this approach is that, provided any new data point is not itself a new maximum or minimum for the data set as a whole, you can apply the above calculations to the new data point to get its score between 200 and 800 without affecting any of the existing scores of the original data set. If a new data point is a new maximum or minimum, you will have to recalculate the scores for the whole data set with this new "normalising" maximum or minimum value.
30,722 | What is the probability that random variable $x_1$ is maximum of random vector $X=(x_i)$ from a multivariate normal distribution? | The question reads to me like the OP was asking: when $U = (X,Y,Z)^{\mathrm{T}}$ is jointly normal, what is the probability $P(X \geq Y \mbox{ and } X \geq Z)$?
For that question we could look at the joint distribution of $AU$ where $A$ looks like
$$
A=\left[
\begin{array}{ccc}
1 & -1 & 0 \\
1 & 0 & -1
\end{array}\right]
$$
Of course, $AU$ is also jointly normal with mean $A\mu$ and variance-covariance $A\Sigma A^{\mathrm{T}}$, and the desired probability is $P(AU > \mathbf{0}_{n-1})$. We could get this in R with something like
set.seed(1)
Mu <- c(1,2,3)
library(MCMCpack)
S <- rwish(3, diag(3)) # get var-cov matrix
A <- matrix(c(1,-1,0, 1,0,-1), nrow = 2, byrow = TRUE)
newMu <- as.vector(A %*% Mu)
newS <- A %*% S %*% t(A)
library(mvtnorm)
pmvnorm(lower=c(0,0), mean = newMu, sigma = newS)
which is about 0.1446487 on my system. If a person knew something about the matrix $\Sigma$ then (s)he might even be able to write something down that looks like a formula (I haven't tried, though).
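A quick sanity check on this construction: when the components are i.i.d., the three events are exchangeable and the probability must be exactly 1/3. A Monte Carlo sketch (illustrative Python, standard library only; the seed and sample size are arbitrary choices):

```python
import random

random.seed(1)
n = 200_000
hits = 0
for _ in range(n):
    # iid N(0,1) components, i.e. mu = 0 and Sigma = identity
    x, y, z = (random.gauss(0, 1) for _ in range(3))
    if x >= y and x >= z:
        hits += 1
print(hits / n)  # should be close to 1/3 by symmetry
```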
30,723 | What is the probability that random variable $x_1$ is maximum of random vector $X=(x_i)$ from a multivariate normal distribution? | Answer updated thanks to remarks from Whuber and Srikant
Proposition
Let $C=[C_1;C_2]$ be a $2\times n$ matrix and let $X^0=(X^0_i)\sim \mathcal{N}(0,\Sigma)$ be $\mathbb{R}^n$-valued. Let $\Sigma^Y={}^tC\Sigma C=(\sigma^Y_{ij})$. Then, for $u_1,u_2\in\mathbb{R}$
$P({}^tC_1X^0\geq u_1\text{ and } {}^tC_2X^0\geq u_2)=\mathbb{E}\left [ \bar{\Phi}\left (\frac{u_2-\frac{\sigma^Y_{21}}{\sigma^Y_{11}}{}^tC_1X^0}{\sqrt{\sigma^Y_{22}-\frac{\sigma^Y_{21}\sigma^Y_{12}}{\sigma^Y_{11}}}} \right )1_{ {}^tC_1X^0\geq u_1 } \right ]$
where $\bar{\Phi}(z)=P(\mathcal{N}(0,1)>z)$.
Answer to the question when the dimension is 3
Assume $i=1$, $\Sigma=(\sigma_{ij})$.
The probability $P(X_1>X_2 \text{ and }X_1>X_3)$ is obtained using the preceding proposition with $X^0=X-\mu$, $C_1=(1,-1,0)$, $C_2=(1,0,-1)$, $u_1=\mu_2-\mu_1$ and $u_2=\mu_3-\mu_1$. This gives
$\sigma^Y_{11}=\sigma_{11}+\sigma_{22}-2\sigma_{12}$
$\sigma^Y_{22}=\sigma_{11}+\sigma_{33}-2\sigma_{13}$
$\sigma^Y_{12}=\sigma_{11}+\sigma_{23}-\sigma_{31}-\sigma_{21}$
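These entries are just $A\Sigma\,{}^tA$ written out for the matrix $A$ with rows $C_1=(1,-1,0)$ and $C_2=(1,0,-1)$. A small numeric check (illustrative Python; the covariance matrix is an arbitrary symmetric example, with integer entries to keep the arithmetic exact):

```python
# sigma^Y = A * Sigma * A^T for A = [[1, -1, 0], [1, 0, -1]]
S = [[4, 1, 1],
     [1, 3, 2],
     [1, 2, 5]]  # arbitrary symmetric "Sigma"
C1, C2 = [1, -1, 0], [1, 0, -1]

def quad(a, b):
    """Compute a^T * S * b."""
    return sum(a[i] * S[i][j] * b[j] for i in range(3) for j in range(3))

assert quad(C1, C1) == S[0][0] + S[1][1] - 2 * S[0][1]        # sigma^Y_11
assert quad(C2, C2) == S[0][0] + S[2][2] - 2 * S[0][2]        # sigma^Y_22
assert quad(C1, C2) == S[0][0] + S[1][2] - S[0][2] - S[0][1]  # sigma^Y_12
print(quad(C1, C1), quad(C2, C2), quad(C1, C2))  # -> 5 7 4
```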
Proof of the proposition
Assume $c\in\mathbb{R}^n$ and $\Sigma$ has full rank. It is easy to show that for any $u\in\mathbb{R}$
$$P({}^tcX^0>u)=\bar{\Phi} \left (\frac{u}{\|\Sigma^{1/2}c\|_2} \right )$$
Let us denote $Y_1={}^tC_1X^0,Y_2={}^tC_2X^0$. From the correlation theorem, since $Y=(Y_1,Y_2)$ is centered Gaussian in $\mathbb{R}^2$ with covariance $\Sigma^Y$,
then $Y_2|Y_1$ is Gaussian with mean $\frac{\sigma^Y_{21}}{\sigma^Y_{11}}Y_1$ and variance $\sigma^Y_{22}-\frac{\sigma^Y_{21}}{\sigma^Y_{11}}\sigma^Y_{12}$.
This, with
$P(Y_1>u_1 \text{ and } Y_2>u_2)=\mathbb{E}\left [\mathbb{E}[1_{Y_2\geq u_2 }|Y_1] 1_{Y_1\geq u_1 }\right ] $
gives the desired result.
How to extend the proposition
If we want to be able to solve the initial problem with dimension larger than $3$,
we need to compute
$P(\forall j \; {}^tc_jX^0\geq u_j)$
(for well chosen $u_j$). Set $Y=(Y_1,\dots,Y_n)$ with $Y_j={}^tc_jX^0$ centered $\mathbb{R}$-valued Gaussians.
You can use the correlation theorem iteratively to derive the distribution of $Y_1|Y_{2:n}$, $Y_2|Y_{3:n}$, and so on. This may give something like a recursive formulation of the solution to the proposition when $C$ is $p\times n$ (recursive on $p$).
Proposition
Let C=[C_1;C_2] be a 2*n matrix, $X^0=(X^0_i)\sim \mathcal{N} (0,\Sigma)$ $\mathbb{R}^n$ valued. Let $\Sigma^Y=^tC\Sigma C=(\sig | What is the probability that random variable $x_1$ is maximum of random vector $X=(x_i)$ from a multivariate normal distribution?
Answer updated thanks to remarks from Whuber and Srikant
Proposition
Let C=[C_1;C_2] be a 2*n matrix, $X^0=(X^0_i)\sim \mathcal{N} (0,\Sigma)$ $\mathbb{R}^n$ valued. Let $\Sigma^Y=^tC\Sigma C=(\sigma^Y_{ij})$. Then, for $u_1,u_2\in\mathbb{R}$
$P(^tC_1X^0\geq u_1\text{ and } ^tC_2X^0\geq u_2)=\mathbb{E}\left [ \bar{\Phi}\left (\frac{u_2-\frac{\sigma^Y_{21}}{\sigma^Y_{11}}^tC_1X^0}{\sqrt{\sigma^Y_{22}-\frac{\sigma^Y_{21}\sigma^Y_{12}}{\sigma^Y_{11}}}} \right )1_{ ^tC_1X^0\geq u_1 } \right ]$
where $\bar{\Phi}=P(\mathcal{N}(0,1)>z)$
Answer to the question when the dimension is 3
Assume $i=1$, $\Sigma=(\sigma_{ij})$.
The probability $P(X_1>X_2 \text{ and }X_1>X_3)$ is obtained using the preceding proposition with $X^0=X-\mu$, $C_1=(1,-1,0)$, $C_2=(1,0,-1)$, $u_1=\mu_2-\mu1$ and $u_2=\mu_3-\mu1$. This gives
$\sigma^Y_{11}=\sigma_{11}+\sigma_{22}-2\sigma_{12}$
$\sigma^Y_{22}=\sigma_{11}+\sigma_{33}-2\sigma_{13}$
$\sigma^Y_{12}=\sigma_{11}+2\sigma_{23}-\sigma_{31}-\sigma_{21}$
Proof of the proposition
Assume $c\in\mathbb{R}^n$ and $\Sigma$ has full rank. It is easy to show that for any $u\in\mathbb{R}$
$$P(^tcX^0>u)=\bar{\Phi} \left (\frac{u}{\|\Sigma^{1/2}c\|_2} \right )$$
Let us denote $Y_1=^tC_1X^0,Y_2=^tC_2X^0$. From the correlation theorem, since $Y=(Y_1,Y_2)$ is centered gaussian in $\mathbb{R}^2$ with covariance $\Sigma^Y$
then $Y_2|Y_1$ is gaussian with mean $\frac{\sigma^Y_{21}}{\sigma^Y_{11}}Y_1$ and variance $\sigma^Y_{22}-\frac{\sigma^Y_{21}}{\sigma^Y_{11}}\sigma^Y_{12}$.
This, with
$P(Y_1>u_1 \text{ and } Y_2>u_2)=\mathbb{E}\left [\mathbb{E}[1_{Y_2\geq u_2 }|Y_1] 1_{Y_1\geq u_1 }\right ] $
gives the desired result.
How to extend the proposition
If we want to be able to solve the initial problem with dimension larger than $3$,
we need to compute
$P(\forall j \; ^tc_jX^0\geq u_j) $
(for well chosen $u_j$). Set $Y=(Y_1,\dots,Y_n)$ with $Y_j=^tc_jX$ centered $\mathbb{R}$-valued gaussians.
You can use the correlation theorem iteratively to derive the distribution of $Y_1|Y_{2:n}$, $Y_2|Y_{3:n}$..... This may give something like a recurcive formulation of the solution to the proposition when C is $p*n$ (recurcive on $p$). | What is the probability that random variable $x_1$ is maximum of random vector $X=(x_i)$ from a mult
Answer updated thanks to remarks from Whuber and Srikant
Proposition
Let C=[C_1;C_2] be a 2*n matrix, $X^0=(X^0_i)\sim \mathcal{N} (0,\Sigma)$ $\mathbb{R}^n$ valued. Let $\Sigma^Y=^tC\Sigma C=(\sig |
30,724 | What is the probability that random variable $x_1$ is maximum of random vector $X=(x_i)$ from a multivariate normal distribution? | I interpreted the question to ask the distribution of the maximal element of a multivariate normal. In this case, the CDF can be computed from the CDF of a multivariate normal. This usually doesn't have a nice solution (even in terms of the univariate normal CDF), but it can be evaluated numerically. In R:
library(mvtnorm)
# given xl, mu and sigma
pmvnorm(upper=rep(xl,length(mu)), mean=mu, sigma=sigma)
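For the special case of a diagonal $\Sigma$ (independent components) the multivariate CDF factorises, $P(\max_i X_i \le x)=\prod_i \Phi((x-\mu_i)/\sigma_i)$, which gives a cheap cross-check of such a computation (illustrative Python, standard library only):

```python
from statistics import NormalDist

def cdf_max(x, mus, sds):
    """P(max(X) <= x) for independent X_i ~ N(mu_i, sd_i^2):
    the product of the marginal CDFs (diagonal-Sigma case)."""
    p = 1.0
    for mu, sd in zip(mus, sds):
        p *= NormalDist(mu, sd).cdf(x)
    return p

print(cdf_max(0.0, [0, 0], [1, 1]))  # -> 0.25: both of two iid N(0,1) below 0
```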
However, on re-reading the question, it seems to be asking the probability that a particular element of the vector is maximal. In this case, I'd agree with G. Jay Kerns.
30,725 | Significance of single precision floating point | From the GPUtools help file, it seems that useSingle=TRUE is the default for the functions.
30,726 | Significance of single precision floating point | Because before GPUs there was no practical sense in using single-precision reals; you never have too much accuracy, and memory is usually not a problem. Supporting only doubles also made the design of R simpler. (Although R supports reading/writing single-precision reals.)
Yes, because Python aims to be more compatible with compiled languages. Yet you are right that it is possible for R libraries' wrappers to do on-the-fly conversion (this of course takes time, but that is a minor problem); you can try e-mailing the GPU packages' maintainers to request such changes.
30,727 | Significance of single precision floating point | I presume that by GPU programming you mean programming nvidia cards, in which case the underlying code calls from R and Python are to C/CUDA.
The simple reason that only single precision is offered is because that is what most GPU cards support.
However, the new nvidia Fermi architecture does support double precision. If you bought an nvidia graphics card this year, then it's probably a Fermi. Even here things aren't simple:
You get a slight performance hit if you compile with double precision (a factor of two if I remember correctly).
On the cheaper Fermi cards, nvidia intentionally disabled double precision. However, it is possible to get round this and run double precision programs. I managed to do this on my GeForce GTX 465 under linux.
To answer the question in your title, "Is single precision OK?", it depends on your application (sorry crap answer!). I suppose everyone now uses double precision because it no longer gives a performance hit.
When I dabbled with GPUs, programming suddenly became far more complicated. You have to worry about things like:
warpsize and arranging your memory properly.
#threads per kernel.
debugging is horrible - there's no print statement in the GPU kernel statements
lack of random number generators
Single precision.
30,728 | Significance of single precision floating point | The vast majority of GPUs in circulation only support single precision floating point.
As far as the title question, you need to look at the data you'll be handling to determine if single precision is enough for you. Often, you'll find that singles are perfectly acceptable for >90% of the data you handle, but will fail spectacularly for that last 10%; unless you have an easy way of determining whether your particular data set will fail or not, you're stuck using double precision for everything.
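That failure mode is easy to demonstrate without a GPU. As a minimal illustration (my own sketch, not from the answer above: plain Python, emulating IEEE 754 single precision by round-tripping values through `struct`), naively accumulating 0.1 a million times drifts badly in single precision while double precision stays essentially exact:

```python
import struct

def f32(x):
    """Round x to the nearest IEEE 754 single-precision value."""
    return struct.unpack('f', struct.pack('f', x))[0]

inc = f32(0.1)        # 0.1 is not exactly representable in binary
total64 = 0.0         # ordinary Python float = double precision
total32 = 0.0         # emulated single precision
for _ in range(1_000_000):
    total64 += 0.1
    total32 = f32(total32 + inc)

print(total64)   # within a tiny fraction of the true sum, 100000
print(total32)   # drifts visibly away from 100000
```

The drift appears because, once the running sum is large, the gap between adjacent single-precision values (about 0.008 near 100000) is a sizeable fraction of the increment itself, so every addition rounds.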
30,729 | Significance of single precision floating point | OK, a new answer to an old question but even more relevant now. The question you're asking has to do with finite precision, normally the domain of signal analysis and experimental mathematics.
Double precision (DP) floats let us pretend that finite precision problems don't exist, the same as we do with most real-world mathematical problems. In experimental math there is no pretending.
Single precision (SP) floats force us to consider quantization noise. If our machine learning models inherently reject noise, such as neural nets (NN), convolutional nets (CNN), residual nets (ResN), etc, then SP most often gives similar results to DP.
Half precision (HP) floats (now supported in cuda toolkit 7.5) require that quantization effects (noise and rounding) be considered. Most likely we'll soon see HP floats in the common machine learning toolkits.
There is recent work to create lower precision computations in floats as well as fixed precision numbers. Stochastic rounding has enabled convergence to proceed with CNNs whereas the solution diverges without it. These papers will help you to improve your understanding of the problems with the use of finite precision numbers in machine learning.
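To make the stochastic-rounding idea concrete: instead of always rounding to the nearest representable value, one rounds up or down at random with probabilities proportional to proximity, which makes the rounding unbiased in expectation. A minimal sketch (my own illustration on an arbitrary grid; real implementations round to the low-precision float grid):

```python
import random

def stochastic_round(x, step):
    """Round x to a multiple of `step`, picking the lower or upper
    neighbour with probability proportional to closeness, so that
    the expected value of the result equals x."""
    lo = (x // step) * step
    frac = (x - lo) / step                 # position between neighbours
    return lo + step if random.random() < frac else lo

random.seed(0)
# Round-to-nearest maps 0.3 to 0 every time -- a systematic bias that
# accumulates over many updates. Stochastic rounding averages out.
n = 100_000
avg = sum(stochastic_round(0.3, 1.0) for _ in range(n)) / n
print(avg)   # close to 0.3
```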
To address your questions:
SP is not so bad. As you point out it's twice as fast, but it also allows you to put more layers into memory. A bonus is in saving overhead getting data on and off the gpu. The faster computations and the lower overhead result in lower convergence times. That said, HP, for some problems, will be better in some parts of the network and not in others.
It seems to me that many of the machine learning toolkits handle SPs and DPs. Perhaps someone else with a wider range of experience with the toolkits will add their nickel.
Python will support what the gpu toolkit supports. You don't want to use python data types because then you'll be running an interpreted script on the cpu.
Note that the trend in neural networks now is to go with very deep layers, with runs of more than a few days common on the fastest gpu clusters.
30,730 | What can be done about assumption violations in logistic regression? | There is no assumption of normality in logistic regression. Linear regression is often motivated as a Gaussian GLM (since solving a least squares problem is the same as assuming the likelihood for the model is normal), and this is where the normality assumption of the residuals comes from. In contrast, logistic regression makes the assumption that the likelihood is binomial
$$ \operatorname{logit}(p_i) = x^T_i\beta $$
$$ y_i \sim \operatorname{Binomial}(p_i; n_i) $$
As a consequence, looking at the difference between observation and prediction is not as informative (especially if the outcome is a 1/0) and you're better off looking at deviance residuals (which require grouping of continuous covariates) or other types of residuals listed in Frank Harrell's Regression modelling strategies.
Heteroskedasticity is actually an assumption of logistic regression. Since the variance of a binomial random variable is $np(1-p)$ and $p$ is a function of $x$, then the conditional variance changes as a function of $x$ (hence is not constant; is heteroskedastic).
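The changing conditional variance is easy to check numerically. A small sketch (plain Python; for illustration the linear predictor is just $x$ itself):

```python
import math

def p(x):
    """Logistic mean function with linear predictor x."""
    return 1.0 / (1.0 + math.exp(-x))

# The Bernoulli conditional variance p(1-p) depends on x:
# it peaks at 0.25 where p = 0.5 and shrinks in the tails.
for x in (-3.0, 0.0, 3.0):
    px = p(x)
    print(f"x = {x:+.0f}   p = {px:.3f}   var = {px * (1 - px):.3f}")
```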
You might want to use splines.
30,731 | Recommendation: high-level comparison of formal causal reasoning approaches | Take a look at Section 4 for a comparison of PO and DAG/SCM approaches:
Imbens, Guido W. 2020. "Potential Outcome and Directed Acyclic Graph Approaches to Causality: Relevance for Empirical Practice in Economics." Journal of Economic Literature, 58 (4): 1129-79. https://doi.org/10.1257/jel.20191597 (ungated draft)
Pearl's response can be found at Pearl, J. On Imbens’s Comparison of Two Approaches to Empirical Economics. (2020). Academic blog. Causal Analysis in Theory and Practice. Posted January 29, 2020.
30,732 | Recommendation: high-level comparison of formal causal reasoning approaches | We have a unifying theory for potential outcomes, graphical models, and structural econometrics. It is based on the theory of structural causal models.
Check out:
Appendix of the Crash Course in Good and Bad Controls.
On Pearl’s Hierarchy and the Foundations of Causal Inference
Pearl, J. (2000). Causality: Models, Reasoning, and Inference. Cambridge University Press., Chapter 7.
Pearl, J., & Mackenzie, D. (2018). The Book of Why: The New Science of Cause and Effect (1st ed.). Basic Books., Chapter 8.
These answers may also help:
Which Theories of Causality Should I know?
Is the linearity assumption in linear regression merely a definition of $\epsilon$?
The references above concern a unified mathematical framework for the counterfactual and structural theories of causation.
In practice, of course, each "school" may have different conventions and cultural differences. For instance, certain identification strategies are more popular in economics than epidemiology, or inside econometrics itself there is a cultural division between "reduced form" and "structural" econometrics, etc.
As for Pearl's answer to Imbens' article, it is simply stating that there's no such thing as a "DAG approach." The formal mathematical framework is a structural causal model. The DAG is just one tool for partially specifying a structural model, namely, imposing certain types of exclusion and independence restrictions. You can impose as many assumptions as you would like, and monotonicity has nothing special in it. For instance, see how Pearl and I defined monotonicity here. That is why it makes no sense to ask how you would represent monotonicity in the "DAG approach" vs "PO approach." Maybe a better question would be: how can we formally represent monotonicity constraints (or other shape constraints) graphically, in a way that we can leverage such constraints to algorithmically derive new identification results? This is the topic of ongoing research.
PS: it goes without saying that I think Imbens is a great scholar, and his work has inspired a lot of my own work as well! The above is just a comment on this specific point.
30,733 | Can any data be learned using polynomial logistic regression | Comments to the question suggest the following interpretation:
Given any two non-overlapping finite collections of points $A$ and $B$ in a Euclidean space $E^n,$ does there always exist a polynomial function $f_{A,B}:E^n\to\mathbb R$ that perfectly separates the collections? That is, $f_{A,B}$ has positive values on all points of $A$ and negative values on all points of $B.$
The answer is yes, by construction.
Let $|\ |$ be the usual Euclidean distance. Its square is a quadratic polynomial. Specifically, using any orthogonal coordinate system write $\mathbf{x}=(x_1,\ldots, x_n)$ and $\mathbf{y}=(y_1,\ldots, y_n).$ We have
$$|\mathbf{x}-\mathbf{y}|^2 = \sum_{i=1}^n (x_i-y_i)^2,$$
which explicitly is a quadratic polynomial function of the coordinates.
Define $$f_{A,B}(\mathbf x)=\left[\sum_{\mathbf y\in A}\frac{1}{|\mathbf x-\mathbf y|^2}-\sum_{\mathbf y\in B}\frac{1}{|\mathbf x-\mathbf y|^2}\right]\prod_{\mathbf y\in A\cup B}|\mathbf x-\mathbf y|^2.$$
Notice how $f_{A,B}$ is defined as a product. The terms on the right hand side clear the denominators of the fractions on the left, showing that $f$ is actually defined everywhere on $E^n$ and is a polynomial function.
The function in the left term of the product has poles (explodes to $\pm \infty$) precisely at the data points $\mathbf x \in A\cup B.$ At the points of $A$ its values diverge to $+\infty$ and at the points of $B$ its values diverge to $-\infty.$ Because the product at the right is non-negative, we see that in a sufficiently small neighborhood of $A$ $f_{A,B}$ is always positive and in a sufficiently small neighborhood of $B$ $f_{A,B}$ is always negative. Thus $f_{A,B}$ does its job of separating $A$ from $B,$ QED.
Here is an illustration showing the contour $f_{A,B}=0$ for $80$ randomly selected points in the plane $E^2.$ Of these, $43$ were randomly selected to form the subset $A$ (drawn as blue triangles) and others form the subset $B,$ drawn as red circles. You can see this construction works because all blue triangles fall within the gray (positive) region where $f_{A,B}\gt 0$ and all the red circles fall within the interior of its complement where $f_{A,B}\lt 0.$
To see more examples, modify and run this R script that produced the figure. Its function f, defined at the outset, implements the construction of $f_{A,B}.$
#
# The columns of `A` are all data points. The values of `I` are +/-1, indicating
# the subset each column belongs to.
#
f <- function(x, A, I) {
d2 <- colSums((A-x)^2)
j <- d2 == 0 # At most one point, assuming all points in `A` are unique
if (sum(j) > 0) # Avoids division by zero
return(prod(d2[!j]) * prod(I[j]))
sum(I / d2) * prod(d2)
}
#
# Create random points and a random binary classification of them.
#
# set.seed(17)
d <- 2 # Dimensions
n <- 80 # total number of points
p <- 1/2 # Expected Fraction in `A`
A <- matrix(runif(d*n), d)
I <- sample(c(-1,1), ncol(A), replace=TRUE, prob=c(1-p, p))
#
# Check `f` by applying it to the data points and confirming it gives the
# correct signs.
#
I. <- sign(apply(A, 2, f, A=A, I=I))
if (!isTRUE(all.equal(I, I.))) stop("f does not work...")
#
# For plotting, compute values of `f` along a slice through the space.
#
slice <- rep(1/2, d-2) # Choose which slice to plot
X <- Y <- seq(-0.2, 1.2, length.out=201)
Z <- matrix(NA_real_, length(X), length(Y))
for (i in seq_along(X)) for (j in seq_along(Y))
Z[i, j] <- f(c(X[i], Y[j], slice), A, I)
#
# Display a 2D plot.
#
image(X, Y, sign(Z), col=c("Gray", "White"), xaxt="n", yaxt="n", asp=1, bty="n",
main="Polynomial separator of random points")
contour(X, Y, Z, levels=0, labels="", lwd=2, labcex=0.001, add=TRUE)
points(t(A), pch=ifelse(I==1, 19, 17), col=ifelse(I==1, "Red", "Blue"))
30,734 | Can any data be learned using polynomial logistic regression | There are a few problems with your first paragraph which may make your question difficult to answer.
We know that a polynomial can approximate any function.
Can it? If you're referring to a Taylor polynomial, then the function must be smooth. Not every function is a smooth function.
In binary logistic regression we're trying to fit a decision boundary to our data.
This isn't true. Logistic regression seeks to estimate the coefficients for a model
$$ p(x) = \dfrac{1}{1 + \exp(-x^T\beta)} $$
There is no decision boundary here, and any cut off is imposed post facto.
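For completeness, a cut off $c$ applied after the fact does induce a boundary, but only through the monotonicity of the logistic function:
$$ p(x) > c \iff x^T\beta > \log\frac{c}{1-c}, $$
so the familiar linear boundary $x^T\beta = 0$ is just the special case $c = 1/2$; it is a property of the chosen cut off, not of the fitted model.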
30,735 | Does automatic model selection via AIC bias the p-values of the selected model? [Looking for simulation-based evidence] | Yes, it does bias the p-values
Your intuition on this is correct --- generally speaking, whenever we select a model via optimisation we bias the resulting p-value for any tests that fail to account for that optimisation step. This is true regardless of whether we optimise over p-values, or the maximum log-likelihood, or AIC, BIC, etc.
By minimising AIC you are maximising the log-likelihood (and therefore the likelihood) over each model with the same number of terms. To see this, suppose we let $\mathcal{M}$ denote an individual model and we let $\mathscr{M}$ denote the class of all models under consideration. If we let $\mathscr{M}_k$ be the class of models with $k$ parameters then we can write the maximum log-likelihood over this class as:
$$R(k) \equiv \max_{\mathcal{M} \in \mathscr{M}_k} \hat{\ell}_\mathcal{M}$$
The minimum AIC over the class of all models is related to this quantity by:
$$\begin{align}
\min_{\mathcal{M} \in \mathscr{M}} \text{AIC}_\mathcal{M}
&= \min_{\mathcal{M} \in \mathscr{M}} (2 k_\mathcal{M} - 2 \hat{\ell}_\mathcal{M}) \\[6pt]
&= \min_{k} \min_{\mathcal{M} \in \mathscr{M}_k} (2 k - 2 \hat{\ell}_\mathcal{M}) \\[6pt]
&= \min_{k} \Big( 2 k - 2 \max_{\mathcal{M} \in \mathscr{M}_k} \hat{\ell}_\mathcal{M} \Big) \\[6pt]
&= \min_{k} \Big( 2 k - 2 R(k) \Big). \\[6pt]
\end{align}$$
As you can see, minimising AIC is equivalent to a two-step process: first we find all the models that maximise the log-likelihood compared to other models with the same number of parameters; then we minimise the quantity $(2k-2R(k))$ using the models from the first step (which each have different values for $k$). This means that when you select a model by minimising AIC, you are implicitly maximising log-likelihood over subclasses of models. By optimising the log-likelihood, you are inducing a bias towards parameter estimates that give a highly "peaked" likelihood function (giving a higher maximum), which is going to tend to happen when you use parameter values that are far from the "null" hypotheses in standard model tests. You then optimise over $k$ which exacerbates this further by selecting the number of model terms that gives a highly "peaked" likelihood function relative to the number of model terms. This means that your optimisation procedure is going to tend to give you a model with "significant" model terms, biasing your p-values downward in these tests.
In regard to the size of the bias, this is going to depend largely on the size of the model class over which you are optimising. If you optimise over a small class of models then the bias may be modest, but if you optimise over a large class of models then the resulting bias will be large. Since you have stated that you are using an "all possible models" approach, that means that you are optimising over $2^m$ possible models where you have $m$ model terms to consider. The size of the model space grows exponentially in $m$ so you rapidly get to a large model space which will lead to large bias when optimising over this space.
Correcting for this kind of bias is complicated, and it generally entails running simulations where you perform the same (AIC) optimisation over a set of null models where all models obey the null hypothesis of interest. If you can form suitable pivotal quantities, this kind of simulation can give you an estimated null distribution for your test statistics under the optimisation method and you can then use this to estimate the true (unbiased) p-value of the test. This is quite a complicated exercise, as it generally requires you to custom-program a computation that loops over model fitting and optimisation, and computation and extraction of resulting test statistics.
30,736 | Does automatic model selection via AIC bias the p-values of the selected model? [Looking for simulation-based evidence] | Since simulations were requested as a followup to Ben's excellent answer, here's a simple example.
Model
Consider a polynomial regression model of the form:
$$y_i = \beta_0 + \sum_{d=1}^D \beta_d x_i^d + \epsilon_i \quad \quad
\epsilon_i \underset{\text{i.i.d.}}{\sim} \mathcal{N}(0, \sigma^2)$$
The intercept $\beta_0$, coefficients $\beta_1, \dots, \beta_D$, and noise variance $\sigma^2$ will be fit by maximum likelihood for each choice of degree $D$. The degree will be chosen to minimize the AIC.
Hypothesis testing
Suppose we want to test the null hypothesis that all coefficients are zero (except the intercept). Ordinarily, we could use an F test. But, as Ben described, the resulting p values will be biased here because the test doesn't account for the model selection step. We can confirm this by examining the distribution of p values in an example where the null hypothesis is known to be true.
Simulation
Repeat $10^4$ times:
1. Generate a dataset containing $n=20$ points, where explanatory variables $x_i$ and responses $y_i$ are sampled independently from the standard normal distribution. Since they're independent, the null hypothesis is true.
2. Fit polynomial regression models with degree $D$ ranging from 1 to $D_{max}=10$, using maximum likelihood.
3. Select the model with minimum AIC.
4. Run an F test for the selected model, as above. Record the resulting p value.
The parameters here are unrealistic (e.g. nobody would fit a 10th degree polynomial to 20 data points). But, a proper hypothesis test should be able to handle it. I've set things up this way to make the failure/bias more obvious.
Biased p values
Under the null hypothesis, proper p values should be uniformly distributed between 0 and 1. We can check whether this is true, since p values from the simulations are samples from the distribution under the null hypothesis. The histogram below shows that the distribution is decidedly non-uniform, with low p values much more probable than they should be.
This indicates that the p values are optimistically biased, and we'd run into trouble using them for hypothesis testing. A test is deemed significant if the p value falls below a threshold $\alpha$, which specifies the maximum acceptable type I error rate---the probability of wrongly obtaining a significant result, assuming the null hypothesis is true. Since the null hypothesis is true in the simulations, all significant results are type I errors. So, for any nominal type I error rate $\alpha$, we can estimate the actual type I error rate as the fraction of p values that fall below $\alpha$. Given proper p values, the nominal and actual rates should be equal. However, as shown in the plot above, the actual type I error rate exceeds the nominal rate for all choices of $\alpha$.
Dependence on model size
As Ben also mentioned, comparing a greater number of models during the model selection step will increase the amount of bias. To show this effect, I repeated the simulation above for different model sizes. In particular, I varied the maximum permitted degree $D_{max}$ of the polynomial regression model. The plot below shows the actual vs. nominal type I error rates for each choice of $D_{max}$.
Notice that the curve is the identity line for $D_{max}=1$, indicating that p values behave as they should. This is because no model selection happens in this case: a degree 1 polynomial is the only choice. However, p values are increasingly biased for greater choices of $D_{max}$ because an increasing number of models are compared in each case.
Code
Matlab code implementing the simulations described above:
% parameters
n = 20; % how many data points
Dmax = 10; % maximum polynomial degree
nreps = 1e4; % how many simulations
% compute p value for each simulation
p = zeros(1, nreps);
for rep = 1 : nreps
fprintf('%d/%d\n', rep, nreps);
% generate data
x = randn(n, 1);
y = randn(n, 1);
% design matrix for all polynomial degrees
A = x .^ (1 : Dmax);
% fit polynomial regression models
% choose degree that minimizes aic
best_mdl = [];
best_aic = inf;
for D = 1 : Dmax
mdl = fitlm(A(:, 1:D), y);
aic = mdl.ModelCriterion.AIC;
if aic < best_aic
best_mdl = mdl;
best_aic = aic;
end
end
% run F test on model with best aic
p(rep) = best_mdl.coefTest();
end
% nominal type I error rates
alpha = linspace(0, 1, 100);
% actual type I error rate for each alpha level
fpr = sum(p < alpha(:), 2) / nreps;
30,737 | Is the mode of a Poisson Binomial distribution next to the mean? | Darroch, J. N. "On the distribution of the number of successes in independent trials." The Annals of Mathematical Statistics 35.3 (1964): 1317-1321,
proved that the mode of a Poisson binomial variable on $n$ trials with mean $\mu$ satisfies the following, for any integer $k$ with $k \leq \mu \leq k+1$:
\begin{equation}
\text{mode}=
\begin{cases}
k & \text{if } k \leq \mu \leq k+\frac{1}{k+2},\\
k \text{ or } k+1 & \text{if } k+\frac{1}{k+2} \leq \mu \leq k+1 - \frac{1}{n-k+1},\\
k+1 & \text{if } k+1 - \frac{1}{n-k+1} \leq \mu \leq k+1.
\end{cases}
\end{equation}
Consequently, the mode differs from the mean by at most $1$. Note that a Poisson binomial distribution can have either one or two consecutive modes.
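As a numerical sanity check of Darroch's bound (a Python sketch added here, not part of the original answer; the PMF is built by convolving the Bernoulli trials in one at a time):

```python
import random

def poisson_binomial_pmf(probs):
    """PMF of the number of successes in independent Bernoulli(p_i) trials,
    computed by convolving in one trial at a time."""
    pmf = [1.0]
    for p in probs:
        nxt = [0.0] * (len(pmf) + 1)
        for k, q in enumerate(pmf):
            nxt[k] += q * (1 - p)   # this trial fails
            nxt[k + 1] += q * p     # this trial succeeds
        pmf = nxt
    return pmf

random.seed(1)
for _ in range(1000):
    probs = [random.random() for _ in range(12)]
    pmf = poisson_binomial_pmf(probs)
    mu = sum(probs)                                   # mean of the distribution
    mode = max(range(len(pmf)), key=pmf.__getitem__)  # (smallest) mode
    assert abs(mode - mu) <= 1                        # mode within 1 of the mean
```

Across many random probability vectors, the assertion never fires, matching the claim that the mode differs from the mean by at most 1.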
30,738 | Is the mode of a Poisson Binomial distribution next to the mean? | I found the answer in a paper by Samuels:
Samuels, Stephen M. "On the number of successes in independent trials." The Annals of Mathematical Statistics 36.4 (1965): 1272-1278.
As a consequence of Theorem 1 we have that if there is an integer $k$ satisfying
$$
k\leq \mu \leq k+1
$$
then
$$
\Pr(X=k-1)<\Pr(X=k) \text{ and } \Pr(X=k+1)>\Pr(X=k+2).
$$
So the answer is yes.
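To illustrate the inequalities numerically (a hypothetical Python check, not from the original answer), take $k$ to be the integer part of the mean $\mu$ and evaluate the Poisson binomial PMF directly:

```python
import random

def pb_pmf(probs):
    """Poisson binomial PMF via sequential convolution of Bernoulli trials."""
    pmf = [1.0]
    for p in probs:
        nxt = [0.0] * (len(pmf) + 1)
        for j, q in enumerate(pmf):
            nxt[j] += q * (1 - p)
            nxt[j + 1] += q * p
        pmf = nxt
    return pmf

random.seed(2)
for _ in range(1000):
    probs = [0.1 + 0.8 * random.random() for _ in range(10)]
    pmf = pb_pmf(probs)
    mu = sum(probs)
    k = int(mu)                  # an integer with k <= mu <= k+1
    # Samuels: Pr(X=k-1) < Pr(X=k)  and  Pr(X=k+1) > Pr(X=k+2)
    assert pmf[k - 1] < pmf[k]
    assert pmf[k + 1] > pmf[k + 2]
```

(The success probabilities are kept away from 0 and 1 only so that the indices $k-1$ and $k+2$ stay inside the support for $n=10$ trials.)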
30,739 | How can I find the distribution for the number of years until the first year's rainfall is exceeded for the first time? | Let us assume that annual rainfall can be considered a continuous variable (which requires the probability of no rainfall in a year to be equal to zero.) The following steps get us to our goal:
1. The first year's rainfall has cumulative distribution function $F(X_1)$, and, as is well-known, $F(X_1) \sim \text{Uniform}(0,1)$ (the Probability Integral Transform).
2. The probability that any given successive year's rainfall exceeds the first year's rainfall is $1-F(X_1)$; label it $p$. (This is because the probability of any given successive year's rainfall being $\leq$ year 1's rainfall is $F(X_1)$, so the probability of exceeding it is just $1-F(X_1)$.) Since $F(X_1) \sim \text{Uniform}(0,1)$, $p$ is too; $1$ minus a $\text{Uniform}(0,1)$ variate is also a $\text{Uniform}(0,1)$ variate.
3. The number of years ($k$) until the first exceedance of the first year's rainfall, conditional upon $p$, has a Geometric distribution with probability parameter $p$: $p(k \mid p) = (1-p)^{k-1}p$.
4. To remove the conditioning upon $p$, we integrate the Geometric distribution with respect to the Uniform distribution on $p$, whose density "disappears" in the expression below because it is equal to $1$ everywhere:
$$p(k) = \int_0^1(1-p)^{k-1}p\,\text{d}p$$
which is the Beta function $\text{B}(2,k)$. Expanding this function leads to:
$$p(k) = {1 \over k(k+1)}, \, k \geq 1$$
A quick check in R that this (plausibly) does sum to $1$:
> sum(beta(2,1:100000))
[1] 0.99999
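The closed form can also be checked by direct simulation (a Python sketch added for illustration; standard normal "rainfall" is an arbitrary choice, since any continuous distribution yields the same law for $k$, and the search is capped at 1000 years for practicality):

```python
import random
from collections import Counter

random.seed(42)
nsim = 100_000
counts = Counter()
for _ in range(nsim):
    x1 = random.gauss(0, 1)              # first year's rainfall
    for k in range(1, 1001):             # cap the search at 1000 years
        if random.gauss(0, 1) > x1:      # year k after the first exceeds it
            counts[k] += 1
            break

# empirical frequencies versus 1 / (k (k+1))
for k in range(1, 6):
    print(k, counts[k] / nsim, 1 / (k * (k + 1)))
```

The empirical frequencies land close to $1/2, 1/6, 1/12, \dots$ as the derivation predicts.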
30,740 | How can I find the distribution for the number of years until the first year's rainfall is exceeded for the first time? | jbowman's answer is correct, though there is an alternative approach to the same answer
By symmetry, and assuming a continuous distribution so that there are no ties, the probability that $X_1$ is the largest of $\{X_1,X_2,\cdots, X_n\}$ is $\frac1n$ and the probability that $X_1$ is the largest of $\{X_1,X_2,\cdots, X_n, X_{n+1}\}$ is $\frac1{n+1}$.
This means the probability that $X_{n+1}$ is the first value to exceed $X_1$ is $\frac{1}{n}-\frac{1}{n+1}=\frac{1}{n(n+1)}$.
Slightly counter-intuitively, this implies that the expected number of years until $X_1$ is exceeded is infinite.
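A short Python illustration of both facts (added here, not part of the original answer): the probabilities telescope to a total of $1$, while the partial sums of $E[K]=\sum_k k\cdot\frac{1}{k(k+1)}=\sum_k\frac{1}{k+1}$ grow like $\log N$ without bound:

```python
import math

# total probability telescopes: sum_{k=1}^{N} 1/(k(k+1)) = 1 - 1/(N+1)
total = sum(1.0 / (k * (k + 1)) for k in range(1, 1001))
assert abs(total - (1 - 1 / 1001)) < 1e-12

def partial_expectation(N):
    """Partial sums of E[K]: sum of k * P(K = k) = 1/(k+1) for k = 1..N."""
    return sum(1.0 / (k + 1) for k in range(1, N + 1))

for N in (10, 1_000, 100_000):
    print(N, partial_expectation(N), math.log(N))   # grows roughly like log N
```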
30,741 | Loss function (and encoding?) for angles | Almost any loss function that is symmetric and differentiable at $0$ is locally quadratic. Thus, you don't have to be too fussy when searching for a good loss function when you need symmetry and differentiability.
Notice that with nearby angles $\phi$ and $\theta,$ the Taylor series expansion of the cosine shows that
$$\mathcal{L}(\phi,\theta)=2(1 - \cos(\phi-\theta)) = (\phi-\theta)^2 + O((\phi-\theta)^4)$$
is locally quadratic at $\phi-\theta=0$ (and all integral multiples of $2\pi$) through third order. Moreover, this function of $\phi$ and $\theta$ isn't badly behaved: it's defined for all angles, is differentiable everywhere, and--most importantly--respects the modular nature of angle comparison. Thus $\mathcal{L}$ is a natural and simple angular version of a quadratic loss. This would be a good place to start your analysis.
If you need more flexibility, consider defining your loss as a function of $\sqrt{2(1-\cos(\phi-\theta))}:$ clearly this is a circular analog of the absolute difference.
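These properties are easy to verify numerically (a Python sketch added here, not from the original answer):

```python
import math

def angle_loss(phi, theta):
    """Circular quadratic-style loss: 2 * (1 - cos(phi - theta))."""
    return 2.0 * (1.0 - math.cos(phi - theta))

# respects the modular nature of angles: shifting by 2*pi costs ~0
assert angle_loss(0.3 + 2 * math.pi, 0.3) < 1e-12

# locally quadratic: for small differences, loss ~ (phi - theta)^2
d = 1e-3
assert abs(angle_loss(d, 0.0) - d**2) < d**4

# the square root is the circular absolute difference: 2 |sin((phi-theta)/2)|
x = 1.2
assert abs(math.sqrt(angle_loss(x, 0.0)) - 2 * abs(math.sin(x / 2))) < 1e-12
```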
30,742 | When generating samples using variational autoencoder, we decode samples from $N(0,1)$ instead of $\mu + \sigma N(0,1)$ | During training, we are drawing $z \sim P(z|x)$, and then decoding with $\hat x = g(z)$.
During generation, we are drawing $z \sim P(z)$, and then decoding $x = g(z)$.
So this answers your question: during generation, we want to generate samples from the prior distribution of latent codes, whereas during training, we are drawing samples from the posterior distribution, because we are trying to reconstruct a specific datapoint.
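A minimal numerical sketch of the two sampling modes (hypothetical Python; the posterior mean and standard deviation for one datapoint are hand-picked illustrative values, and the decoder $g$ is left abstract):

```python
import random

random.seed(0)

# Suppose the encoder produced these posterior parameters for one datapoint x
# (illustrative values for a 1-D latent space):
mu, sigma = 1.5, 0.2

# Training: sample from the posterior q(z|x) via the reparameterization trick
z_train = [mu + sigma * random.gauss(0, 1) for _ in range(10_000)]

# Generation: sample straight from the prior p(z) = N(0, 1)
z_gen = [random.gauss(0, 1) for _ in range(10_000)]

print(sum(z_train) / len(z_train))   # concentrated near mu = 1.5
print(sum(z_gen) / len(z_gen))       # centered near 0
```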
30,743 | If the AIC and the BIC are asymptotically equivalent to cross validation, is it possible to dispense with a test set when using them? | AIC is asymptotically equivalent to leave-1-out cross-validation (LOOCV) (Stone 1977) and BIC is equivalent to leave-k-out cross-validation (LKOCV) where $k=n[1-1/(\log(n)-1)]$, with $n=$ sample size (Shao 1997). So if you are happy with LOOCV or LKOCV in terms of optimizing model prediction error and consistency of selection, respectively, then yes, you could in principle get rid of splitting the data into a training & test set. Note that within the context of L0-penalized GLMs (where you penalize the log-likelihood of your model based on $\lambda$ times the number of nonzero coefficients, i.e. the L0-norm of your model coefficients) you can also optimize the AIC or BIC objective directly, as $\lambda = 2$ for AIC and $\lambda=\log(n)$ for BIC, which is what is done in the l0ara R package. To me this makes more sense than what is done e.g. in the case of LASSO or elastic net regression in glmnet, where optimizing one objective (LASSO or elastic net regression) is followed by the tuning of the regularization parameter(s) based on some other objective (which e.g. minimizes cross-validation prediction error, AIC or BIC).
Syed (2011) on page 10 notes "We can also try to gain an intuitive understanding of the asymptotic equivalence by noting that the AIC minimizes the Kullback-Leibler divergence between the approximate model and the true model. The Kullback-Leibler divergence is not a distance measure between distributions, but really a measure of the information loss when the approximate model is used to model the ground reality. Leave-one-out cross validation uses a maximal amount of data for training to make a prediction for one observation. That is, $n-1$ observations as stand-ins for the approximate model relative to the single observation representing "reality". We can think of this as learning the maximal amount of information that can be gained from the data in estimating loss. Given independent and identically distributed observations, performing this over $n$ possible validation sets leads to an asymptotically unbiased estimate."
Note that the LOOCV error can also be calculated analytically from the residuals and the diagonal of the hat matrix, without having to actually carry out any cross validation. This would always be an alternative to the AIC, as that is only an asymptotic approximation of the LOOCV error.
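For ordinary least squares this identity is exact: the leave-one-out residual equals $e_i/(1-h_{ii})$, where $e_i$ is the ordinary residual and $h_{ii}$ is the $i$-th diagonal element of the hat matrix. A Python sketch (added here, using simple linear regression so that $h_{ii}=1/n+(x_i-\bar x)^2/S_{xx}$ has a closed form) compares the shortcut with an explicit leave-one-out loop:

```python
import random

random.seed(0)
n = 30
x = [random.uniform(0, 10) for _ in range(n)]
y = [2.0 + 0.5 * xi + random.gauss(0, 1) for xi in x]

def ols_fit(xs, ys):
    """Closed-form simple linear regression: returns (intercept, slope)."""
    m = len(xs)
    xbar = sum(xs) / m
    ybar = sum(ys) / m
    sxx = sum((xi - xbar) ** 2 for xi in xs)
    b = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(xs, ys)) / sxx
    return ybar - b * xbar, b

# Shortcut: LOOCV residual_i = e_i / (1 - h_ii), with no refitting
a, b = ols_fit(x, y)
xbar = sum(x) / n
sxx = sum((xi - xbar) ** 2 for xi in x)
shortcut = [(y[i] - (a + b * x[i])) / (1 - (1 / n + (x[i] - xbar) ** 2 / sxx))
            for i in range(n)]

# Brute force: actually refit with each point held out
brute = []
for i in range(n):
    ai, bi = ols_fit(x[:i] + x[i + 1:], y[:i] + y[i + 1:])
    brute.append(y[i] - (ai + bi * x[i]))

assert all(abs(s - t) < 1e-9 for s, t in zip(shortcut, brute))
```

The two sets of leave-one-out residuals agree to machine precision, so the analytic shortcut avoids the $n$ refits entirely.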
References
Stone M. (1977) An asymptotic equivalence of choice of model by cross-validation and Akaike’s criterion. Journal of the Royal Statistical Society Series B. 39, 44–7.
Shao J. (1997) An asymptotic theory for linear model selection. Statistica Sinica 7, 221-242.
AIC is asymptotically equivalent to leave-1-out cross-validation (LOOCV) (Stone 1977) and BIC is equivalent to leave-k-out cross-validation (LKOCV) where $k=n[1−1/(\log(n)−1)]$, with $n=$ sample size (Shao 1997). So if you are happy with LOOCV or LKOCV in terms of optimizing model prediction error and consistency of selection, respectively, then yes you could in principle get rid of splitting the data in a training & test set. Note that within the context of L0-penalized GLMs (where you penalize the log-likelihood of your model based on lambda * the nr of nonzero coefficients, i.e. the L0-norm of your model coefficients) you can also optimize the AIC or BIC objective directly, as $\lambda = 2$ for AIC and $\lambda=\log(n)$ for BIC, which is what is done in the l0ara R package. To me this makes more sense than what they e.g. do in the case of LASSO or elastic net regression in glmnet, where optimizing one objective (LASSO or elastic net regression) is followed by the tuning of the regularization parameter(s) based on some other objective (which e.g. minimizes cross validation prediction error, AIC or BIC).
Syed (2011) on page 10 notes "We can also try to gain an intuitive understanding of the asymptotic equivalence by noting that the AIC minimizes the Kullback-Leibler divergence between the approximate model and the true model. The Kullback-Leibler divergence is not a distance measure between distributions, but really a measure of the information loss when the approximate model is used to model the ground reality. Leave-one-out cross validation uses a maximal amount of data for training to make a prediction for one observation. That is, $n −1$ observations as stand-ins for the approximate model relative to the single observation representing “reality”. We can think of this as learning the maximal amount of information that can be gained from the data in estimating loss. Given independent and identically distributed observations, performing this over $n$ possible validation sets leads to an asymptotically unbiased estimate."
Note that the LOOCV error can also be calculated analytically from the residuals and the diagonal of the hat matrix, without having to actually carry out any cross validation. This would always be an alternative to the AIC, as that is only an asymptotic approximation of the LOOCV error.
References
Stone M. (1977) An asymptotic equivalence of choice of model by cross-validation and Akaike’s criterion. Journal of the Royal Statistical Society Series B. 39, 44–7.
Shao J. (1997) An asymptotic theory for linear model selection. Statistica Sinica 7, 221-242.
If the AIC and the BIC are asymptotically equivalent to cross validation, is it possible to dispense with a test set when using them?

Not really an answer to your question, but a bit long for a comment. Asymptotic equivalence boils down to saying: if the sample is very large and the number of parameters not that large, how you penalize the number of parameters won't matter much: the $-2\log(lik)$ part will be what matters most.
I guess no empirical answer of the kind you request can be given, since it would be highly problem dependent. A sample size may be "large" in a problem with five parameters, and not so large in a problem in which the number of parameters (or equivalent parameters: think of a non-parametric setting) also grows with the sample size.
Further, one may not have a clear cut notion of parameters and how to count them: think of CART-style trees. How do you count parameters there?
For these reasons I think cross-validation, however expensive and cumbersome at times, continues to be a tool difficult to replace.
Covariance matrix as linear transformation

This is not true for all (non-zero) vectors, but let's explore. The covariance matrix $A$ has an orthonormal basis $v_1, \ldots, v_n$ of eigenvectors with eigenvalues $\lambda_1 \geq \lambda_2 \geq \ldots \geq \lambda_n \geq 0$ (all are non-negative since these correspond to variances). Suppose that $\lambda_1 > \lambda_2$ and take some vector $v = c_1v_1+\ldots+c_nv_n$ where $c_1 > 0$. Then $$A^m v = c_1 \lambda_1^m v_1 + \ldots + c_n \lambda_n^m v_n$$ The term $\lambda_1^m$ dominates the others so $A^mv$ points more and more in the direction of $v_1$. To be more precise let $\alpha$ be the angle between $v_1$ and $A^m v$. Then $$\cos(\alpha) = \frac{\langle v_1, A^mv\rangle}{\lVert A^m v\rVert} = \frac{c_1 \lambda_1^m}{\left(c_1^2\lambda_1^{2m} + \ldots + c_n^2 \lambda_n^{2m}\right)^{\tfrac12}}$$ and this tends to $1$ for increasing power $m$. If we take $c_1<0$ then $\cos(\alpha)$ tends to $-1$ so $A^m v$ points in the direction of $-v_1$ in this case.
Note that if $v$ was taken orthogonal to $v_1$ (so $c_1=0$) then $A^m v$ remains orthogonal to $v_1$.
If $A$ is non-singular (so $\lambda_n >0$) then $v_1, \ldots, v_n$ are still eigenvectors of $A^{-1}$ but now with eigenvalues $0 < \lambda_1^{-1} \leq \lambda_2^{-1} \leq \ldots \leq \lambda_n^{-1}$. Note that the ordering of the eigenvalues reverses and the same story can be applied to $A^{-m} v$. In particular if $\lambda_{n-1}^{-1} < \lambda_n^{-1}$ and the coefficient $c_n$ in $v$ is not zero then $A^{-m}v$ points more and more in the direction of $\pm v_n$.
Gaussian Mixture Models: Maximum Likelihood Estimation or Expectation Maximization?

You can use ML directly, but because the component assignments of the observations to the different gaussians (the latent variables) are unknown, you'll probably find that your optimization objective is pretty hard. The EM iterative method solves this intractability.
Suggested read:
https://see.stanford.edu/materials/aimlcs229/cs229-notes8.pdf
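As a concrete illustration of the iterations, here is a minimal two-component, one-dimensional EM sketch (unit variances are assumed known purely for brevity; this is an illustration, not the implementation from the linked notes, which also update the variances):

```python
import numpy as np

rng = np.random.default_rng(2)
# toy data: mixture of N(-4, 1) (60%) and N(3, 1) (40%)
x = np.concatenate([rng.normal(-4, 1, 300), rng.normal(3, 1, 200)])

def npdf(x, mu):                     # N(mu, 1) density
    return np.exp(-0.5 * (x - mu) ** 2) / np.sqrt(2 * np.pi)

pi = np.array([0.5, 0.5])            # mixing weights (the unknown "priors")
mu = np.array([-1.0, 1.0])           # crude initial means
for _ in range(50):
    # E-step: responsibility of each component for each observation
    w = pi * np.column_stack([npdf(x, m) for m in mu])
    w /= w.sum(axis=1, keepdims=True)
    # M-step: closed-form updates given the responsibilities
    pi = w.mean(axis=0)
    mu = (w * x[:, None]).sum(axis=0) / w.sum(axis=0)

print(pi.round(2), mu.round(2))      # near [0.6 0.4] and [-4, 3]
```

Each step has a closed form precisely because the E-step fills in the missing memberships; attacking the mixture log-likelihood directly gives no such decomposition.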
Gaussian Mixture Models: Maximum Likelihood Estimation or Expectation Maximization?

Mehrin, you are in danger of creating a false dichotomy. EM is an optimisation technique that can be used to find maximum likelihood estimates and so the choice is not "one or the other".
In mixture models EM is often used to find MLEs or MAPs as it produces transparent algorithms.
Semi-martingale vs. martingale. What is the difference?

Martingale and semi-martingale have very precise mathematical definitions, so it's definitely not something easy to understand. I'll try to give some intuition without too much mathematical detail.
A martingale is a stochastic process whose conditional expectation of the next value, given everything observed so far, is equal to the current value. This means:
No clear trend (like a random process)
Your previous knowledge can't help you to predict the future
For example, suppose you hold Google stock and it currently trades at $100. You might assume the daily stock movement is a martingale. The probability of it reaching 110 is then the same as the probability of it reaching 90, and on average the most likely stock price after a week is still 100. If you need to make a bet on the most likely stock price after one week of trading, you should go for 100. Note that in this example we don't need to know the past stock prices; only today's price is relevant. Since your best prediction is actually today's price, you aren't really predicting anything.
Mathematically:
$$\mathbf E(X_{n+1}|X_1,\ldots,X_n)= X_n.$$
A semi-martingale is similar to a martingale but is not always a martingale. For example, if you can somehow use the past stock data to accurately predict the Google stock price for the first week (and only the first week), it won't be a martingale process during that week. Starting from the second week, the process becomes a martingale again.
Mathematically:
You can think of a semi-martingale as the sum of a martingale (more precisely, a local martingale) and a non-martingale process.
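The defining property $\mathbf E(X_{n+1}\mid X_1,\ldots,X_n)= X_n$ can be checked empirically on the simplest martingale, a symmetric random walk (a toy simulation; real asset prices are at best approximately martingales):

```python
import random

random.seed(3)

def walk(n):
    """Position of a symmetric random walk after n fair +/-1 steps."""
    return sum(random.choice([-1, 1]) for _ in range(n))

# Martingale check: E[X_11 | X_10 = 2] should equal the current value, 2
samples = []
for _ in range(100_000):
    x10 = walk(10)
    if x10 == 2:                       # condition on the current value
        samples.append(x10 + random.choice([-1, 1]))

cond_mean = sum(samples) / len(samples)
print(cond_mean)                       # close to 2: the past gives no extra edge
```

Conditioning on any other current value (or on the whole path) gives the same result: the best forecast of tomorrow is today.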
Is the sum of a large number of independent Cauchy random variables Normal?

No.
You're missing one of the central assumptions of the central limit theorem:
... random variables with finite variances ...
The Cauchy distribution does not have a finite variance.
The Cauchy distribution is an example of a distribution which has no mean, variance or higher moments defined.
In fact
If $X_1, \ldots, X_n$ are independent and identically distributed random variables, each with a standard Cauchy distribution, then the sample mean $\frac{X_1 + \cdots + X_n}{n}$ has the same standard Cauchy distribution.
So the situation in your question is quite clear cut, you just keep getting back the same Cauchy distribution.
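A quick simulation makes this visible: the spread of the sample mean does not shrink as $n$ grows, whereas for a finite-variance distribution it would shrink like $1/\sqrt{n}$ (sketch using the inverse-CDF method for standard Cauchy draws; the interquartile range of a standard Cauchy is 2):

```python
import math
import random

random.seed(4)

def cauchy():
    """Standard Cauchy draw via the inverse CDF."""
    return math.tan(math.pi * (random.random() - 0.5))

def iqr_of_sample_mean(n, reps=2000):
    means = sorted(sum(cauchy() for _ in range(n)) / n for _ in range(reps))
    return means[3 * reps // 4] - means[reps // 4]

for n in (1, 10, 100):
    print(n, round(iqr_of_sample_mean(n), 2))   # IQR stays near 2 for every n
```

The IQR is used instead of the standard deviation because, of course, the latter does not exist here.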
This is the concept of a stable distribution right?
Yes. A (strictly) stable distribution (or random variable) is one for which any linear combination $a X_1 + b X_2$ of two i.i.d. copies is distributed proportionally to the original distribution. The Cauchy distribution is indeed strictly stable.
(*) Quotations from wikipedia.
How does OLS regression relate to generalised linear modelling

In the context of generalized linear models (GLMs), OLS is viewed as a special case of GLM. Under this framework, the distribution of the OLS error terms is normal (gaussian) and the link function is the identity function.
Generalized linear models allow for different error distributions and also allow the dependent (or response) variable to have a different relationship with the independent variables. This allows for modelling counts or binary or multinomial outcomes. This relationship is encoded in the link function.
Below is an example using R to show that OLS is a special case of GLM:
# create data
x <- 1:20
y <- 2*x + 3 + rnorm(20)
# OLS
lm(y~x)
Call:
lm(formula = y ~ x)
Coefficients:
(Intercept) x
2.706 2.011
# GLM
glm(y~x, family=gaussian(identity))
Call: glm(formula = y ~ x, family = gaussian(identity))
Coefficients:
(Intercept) x
2.706 2.011
Degrees of Freedom: 19 Total (i.e. Null); 18 Residual
Null Deviance: 2717
Residual Deviance: 28.98 AIC: 70.18
It is important to note that OLS can also be viewed mathematically as the linear projection of the dependent variable onto the independent variables in a manner that minimizes the squared distance from the projection to the observations. From this bare-bones viewpoint, the assumption of normality in the conditional error term is irrelevant.
However, to make statistical inferences, assumptions must be added. Either normality of the error term or, more typically, the central limit theorem is invoked for the purpose of inference following estimation of an OLS model.
How does OLS regression relate to generalised linear modelling

Generalized linear models are an extension of OLS. In both there is a linear relationship between the "dependent" variable and the explanatory variables of the form $y=\beta_0+\beta_1x_1+\beta_2x_2+\ldots+\beta_nx_n+\varepsilon$, or $\mathbf{y}=\mathbf{X}\mathbf{\beta}$. In generalized linear models, though, the linear predictor is $\mathbf{\rho}=\mathbf{X}\mathbf{\beta}$, and it relates to the mean response through the link function $g$: $E(Y) = \mu = g^{-1}(\rho)$.
In OLS the assumption is that the residuals follow a normal distribution with mean zero and constant variance. This is not the case in a GLM, where the variance of the predicted values is a function of $E(y)$.
Can frequentists use Bayes theorem?

I can't find the quote but I read somewhere that: using Bayes Theorem doesn't make you a Bayesian, using Bayes Theorem for everything does.
Bayes Theorem is used by frequentists all the time. See the examples at the Bayes Theorem Wikipedia page. Scroll down to the Interpretation section and you'll notice that there is a Bayesian Interpretation and a Frequentist Interpretation section.
Contrast this with Bayesian Inference.
So yes, a frequentist can use Bayes Theorem. Bayesian inference views probabilities and uncertainty differently than frequentist inference, and Bayes Theorem is sort of the universal engine used in that inference.
Beta as distribution of proportions (or as continuous Binomial)

The beta as a distribution for variables that are, or are like, proportions is a popular playground in several fields of statistical science. Beta regression is a major focus of this text and monograph.
As that book and other literature exemplify in detail, that still leaves scope for discussion in general and in particular over the merits of such models as compared with generalised linear models using a binomial family and (usually) robust-sandwich-Huber-White flavour, let alone linear probability models.
Beta as distribution of proportions (or as continuous Binomial)

Adding to Nick's answer, the parametrization described in my question is also mentioned in the vignette of the betareg package and by Ferrari and Cribari-Neto (2004), who proposed it in their paper about Beta regression. Ferrari and Cribari-Neto (2004) describe the parameters as $\mu$ for the mean and $\phi$ for precision (i.e. as $\phi$ grows, the variance decreases).
Ferrari, S., & Cribari-Neto, F. (2004). Beta regression for modelling rates and proportions. Journal of Applied Statistics, 31(7), 799-815.
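In this mean-precision parametrization the usual shape parameters are recovered as $\alpha=\mu\phi$ and $\beta=(1-\mu)\phi$, giving $E(X)=\mu$ and $\mathrm{Var}(X)=\mu(1-\mu)/(1+\phi)$. A small exact-arithmetic check of that algebra (the values $\mu=0.3$, $\phi=20$ are arbitrary):

```python
from fractions import Fraction

mu, phi = Fraction(3, 10), Fraction(20)        # mean 0.3, precision 20
a, b = mu * phi, (1 - mu) * phi                # classical Beta(a, b) shapes

mean_ab = a / (a + b)                          # standard Beta mean a/(a+b)
var_ab = a * b / ((a + b) ** 2 * (a + b + 1))  # standard Beta variance

assert mean_ab == mu
assert var_ab == mu * (1 - mu) / (1 + phi)
print(a, b, var_ab)
```

Note that $\alpha+\beta=\phi$, which is why $\phi$ behaves as a precision: for fixed $\mu$, larger $\phi$ means smaller variance.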
What is "t" in generating functions?

In a sense, an MGF is simply a way of encoding a set of moments into a convenient function in a way that you can do some useful things with the function.
The variable $t$ in no way relates to the random variable $X$. You could as readily write $M_X(s)$ or $M_X(u)$... it is, in essence a kind of dummy variable. It doesn't stand for anything beyond being the argument of the mgf.
Herbert Wilf [1] calls a generating function:
a clothesline on which we hang up a sequence of numbers for display
It really wouldn't matter which exact clothesline you hung them on; another would do just as well.
Is there any way to derive the functions from anywhere?
There's more than one way to turn a set of moments into a generating function (e.g. a discrete distribution has a probability generating function, a moment generating function, a cumulant generating function and a characteristic function), and you can recover the moments (in some cases less directly than others) from any of them.
So there's not a unique way to encode a set of moments into a function; it's a matter of choice about how you set it up. While they're similar (and, naturally, related), some are more convenient for particular kinds of tasks.
I see a certain analogy between mgf and Laplace transform and cf and Fourier transform.
Not merely an analogy, at least if we consider the bilateral Laplace transform (which I'll still denote as $\mathcal{L}$ here). We see $M_X(t) = \mathcal{L}_X(-t)$ is (at least up to a change of sign) really a Laplace transform (indeed, consider $\mathcal{L}_X(-t) =\mathcal{L}_{-X}(t)$, so it's the bilateral Laplace transform of a flipped variate). One can convert readily from one to the other, and use results for Laplace transforms on mgfs quite happily (and, for that matter, tables of Laplace transforms, if we keep that sign issue in mind). Similarly, characteristic functions are not merely analogous to Fourier transforms, they are Fourier transforms (again, up to the sign of the argument which is of no consequence outside the obvious effect swapping the sign of the argument has on a function).
If Fourier transforms and Laplace transforms help give you intuition about what mgfs and cfs "are" you should certainly exploit those intuitions, but on the other hand, it's not always necessary to have intuition when manipulating these things.
In fact when playing with cfs, because they always exist and are unique, I often tend to think of them as just the distribution looked at through a different lens.
I can see that taking the derivative of the function and evaluating at t=0 gives the moment (if the integral is absolutely convergent), but why?
Because the particular generating function we chose to use (the mgf) was set up to work that way. In order to be able to extract the set of moments from the function again you need something like that -- a way to eliminate all the lower ones (such as differentiation) and eliminate all the higher ones (such as set the argument to 0) so that you can pick out the exact one you need. For that to happen you already need something that works kind of like an mgf. At the same time, it's nice if it has some other properties you can exploit (as the various generating functions we use with random variables do), so that restricts our set of choices even further.
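You can watch this mechanism work numerically: take a known mgf, differentiate at $t=0$ by finite differences, and the moments fall out (a sketch using the Exponential($\lambda$) mgf $M(t)=\lambda/(\lambda-t)$, valid for $t<\lambda$, whose first two moments are $1/\lambda$ and $2/\lambda^2$):

```python
lam = 2.0
M = lambda t: lam / (lam - t)          # mgf of Exponential(lam), for t < lam

h = 1e-5
m1 = (M(h) - M(-h)) / (2 * h)          # central difference for M'(0)  = E[X]
m2 = (M(h) - 2 * M(0) + M(-h)) / h**2  # central difference for M''(0) = E[X^2]

print(m1, m2)   # both close to 0.5, i.e. 1/lam and 2/lam^2
```

Setting $t=0$ after differentiating kills all the higher-order terms of $\mathbf E[e^{tX}] = \sum_k \mathbf E[X^k]t^k/k!$, leaving exactly the moment you asked for.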
[1] Wilf, H. (1994)
generatingfunctionology, 2nd ed
Academic Press Inc., San Diego
https://www.math.upenn.edu/~wilf/DownldGF.html
The variable $t$ in no way relates to the rand | What is "t" in generating functions?
In a sense, an MGF is simply a way of encoding a set of moments into a convenient function in a way that you can do some useful things with the function.
The variable $t$ in no way relates to the random variable $X$. You could as readily write $M_X(s)$ or $M_X(u)$... it is, in essence a kind of dummy variable. It doesn't stand for anything beyond being the argument of the mgf.
Herbert Wilf [1] calls a generating function:
a clothesline on which we hang up a sequence of numbers for display
It really wouldn't matter which exact clothesline you hung them on; another would do just as well.
Is there any way to derive the functions from anywhere?
There's more than one way to turn a set of moments into a generating function (e.g. a discrete distribution has a probability generating function, a moment generating function, a cumulant generating function and a characteristic function and you can recover the moments (in some cases less directly than others) from any of them.
So there's not a unique way to encode a set of moments into a function; it's a matter of choice about how you set it up. While they're similar (and, naturally, related), some are more convenient for particular kinds of tasks.
I see a certain analogy between mgf and Laplace transform and cf and Fourier transform.
Not merely an analogy, at least if we consider the bilateral Laplace transform (which I'll still denote as $\mathcal{L}$ here). We see $M_X(t) = \mathcal{L}_X(-t)$ is (at least up to a change of sign) really a Laplace transform (indeed, consider $\mathcal{L}_X(-t) =\mathcal{L}_{-X}(t)$, so it's the bilateral Laplace transform of a flipped variate). One can convert readily from one to the other, and use results for Laplace transforms on mgfs quite happily (and, for that matter, tables of Laplace transforms, if we keep that sign issue in mind). Similarly, characteristic functions are not merely analogous to Fourier transforms, they are Fourier transforms (again, up to the sign of the argument which is of no consequence outside the obvious effect swapping the sign of the argument has on a function).
If Fourier transforms and Laplace transforms help give you intuition about what mgfs and cfs "are" you should certainly exploit those intuitions, but on the other hand, it's not always necessary to have intuition when manipulating these things.
In fact when playing with cfs, because they always exist and are unique, I often tend to think of them as just the distribution looked at through a different lens.
I can see that taking the derivative of the function and evaluating at t=0 gives the moment (if the integral is absolutely convergent), but why?
Because the particular generating function we chose to use (the mgf) was set up to work that way. In order to be able to extract the set of moments from the function again you need something like that -- a way to eliminate all the lower ones (such as differentiation) and eliminate all the higher ones (such as set the argument to 0) so that you can pick out the exact one you need. For that to happen you already need something that works kind of like an mgf. At the same time, it's nice if it has some other properties you can exploit (as the various generating functions we use with random variables do), so that restricts our set of choices even further.
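To make that mechanism concrete (a worked illustration of mine, valid when the mgf exists in a neighbourhood of $0$ so the series can be differentiated term by term):

$$M_X(t) = E\left[e^{tX}\right] = \sum_{k=0}^{\infty} \mu_k' \,\frac{t^k}{k!} \qquad\text{so}\qquad \left.\frac{d^n}{dt^n} M_X(t)\right|_{t=0} = \mu_n',$$

where $\mu_k' = E[X^k]$ is the $k$-th raw moment: differentiating $n$ times kills the lower-order terms and setting $t=0$ kills the higher-order ones. For instance, a standard exponential has $M_X(t) = 1/(1-t)$ for $t<1$, giving $M_X'(0)=1$ and $M_X''(0)=2$, its first two raw moments.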
[1] Wilf, H. (1994)
generatingfunctionology, 2nd ed
Academic Press Inc., San Diego
https://www.math.upenn.edu/~wilf/DownldGF.html
30,756 | Variable selection with glmnet (caret package)
What you're trying to do here is to identify the most "important" variables within glmnet and then to pass your features to another model. This is not optimal, as Max Kuhn writes here:
In many cases, using these models with built-in feature selection will be more efficient than algorithms where the search routine for the right predictors is external to the model. Built-in feature selection typically couples the predictor search algorithm with the parameter estimation and are usually optimized with a single objective function (e.g. error rates or likelihood).
From a theoretical perspective, different models allow for different degrees of flexibility and protection against overfitting. Why would you use a set of optimal parameters from one model in another one?
The only scenario I can think of for using glmnet as a feature filter is if you are forced to use linear regression and you have a wide matrix of features (more features than cases). There is nothing wrong with this; simply the results (RMSE, R^2) may be suboptimal.
30,757 | Variable selection with glmnet (caret package)
I have read academic papers citing the effectiveness of using Lasso for variable selection as well as actually putting it into practice myself.
The following code block identifies features from your data set.
require(glmnet)
##returns variables from lasso variable selection, use alpha=0 for ridge
ezlasso=function(df,yvar,folds=10,trace=F,alpha=1){
x<-model.matrix(as.formula(paste(yvar,"~.")),data=df)
x=x[,-1] ##remove intercept
glmnet1<-glmnet::cv.glmnet(x=x,y=df[,yvar],type.measure='mse',nfolds=folds,alpha=alpha)
co<-coef(glmnet1,s = "lambda.1se")
inds<-which(co!=0)
variables<-row.names(co)[inds]
variables<-variables[!(variables %in% '(Intercept)')];
return( c(yvar,variables));
}
(I cannot take 100% credit for this code as I am sure it is adapted from some place - most likely here: Using LASSO from lars (or glmnet) package in R for variable selection )
While on the topic of variable selection, I have also found VIF (variance inflation factor) to be effective, especially when cross-validated.
require(VIF)
require(cvTools);
#returns selected variables using VIF and kfolds cross validation
ezvif=function(df,yvar,folds=5,trace=F,ignore=c()){
  df=df[, !(names(df) %in% ignore), drop=FALSE]; #drop ignored columns (replaces the undefined helper discard())
f=cvFolds(nrow(df),K=folds);
findings=list();
for(v in names(df)){
if(v==yvar)next;
findings[[v]]=0;
}
for(i in 1:folds){
if(trace) message("fold ",i);
    rows=f$subsets[f$which!=i] ##training rows: all folds except fold i
y=df[rows,yvar];
xdf=df[rows,names(df) != yvar]; #remove output var
    if(trace) message("trying ",i," ",yvar," ",nrow(df)," ",length(y)," subsize=",min(200,floor(nrow(xdf)))); #message() replaces the undefined helper say()
vifResult=vif(y,xdf,trace=trace,subsize=min(200,floor(nrow(xdf))))
if(trace) print(names(xdf)[vifResult$select]);
for(v in names(xdf)[vifResult$select]){
findings[[v]]=findings[[v]]+1; #vote
}
}
findings=(sort(unlist(findings),decreasing = T))
if(trace) print(findings[findings>0]);
return( c(yvar,names(findings[findings==findings[1]])) )
}
Both of the above ez functions return a vector of variable names. The following code block converts the return values to a formula.
#converts ezvif or ezlasso results into formula
ezformula=function(v,operator=' + '){
return(as.formula(paste(v[1],'~',paste(v[-1],collapse = operator))))
}
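As an aside (mine, not part of the original answer): base R's reformulate() does the same job as ezformula and can serve as a cross-check.

```r
# Build a model formula from a character vector of the form
# c(response, predictor1, predictor2, ...), as returned by the
# ez functions above. The variable names here are hypothetical.
v <- c("mpg", "cyl", "hp", "wt")
f <- reformulate(termlabels = v[-1], response = v[1])
print(f) # mpg ~ cyl + hp + wt
```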
I hope this is helpful.
30,758 | Standard error of regression coefficient without raw data
With the usual notation, arrange the independent variables by columns in an $n\times (p+1)$ array $X$, with one of them filled with the constant $1$, and arrange the dependent values $Y$ into an $n$-vector (also a column). Assuming the appropriate inverses exist, the desired formulas are
$$b = (X^\prime X)^{-1} X^\prime Y$$
for the $p+1$ parameter estimates $b$,
$$s^2 = (Y^\prime Y - b^\prime X^\prime Y)/(n-p-1)$$
for the residual standard error, and
$$V = (X^\prime X)^{-1} s^2$$
for the variance-covariance matrix of $b$.
When only the summary statistics $\newcommand{\m}{\mathrm m} \m_X$ (the means of the columns of $X$, forming a $p+1$-covector which includes a mean of $1$ for the constant), $m_Y$ (the mean of $Y$), $\text{Cov}(X)$, $\text{Var}(Y)$, and $\text{Cov}(X,Y)$ are available, first recover the needed values via
$$(X^\prime X)_0 = (n-1)\text{Cov}(X) + n \m_X \m_X^\prime,$$
$$Y^\prime Y = (n-1)\text{Var}(Y) + n m_Y^2,$$
$$(X^\prime Y)_0 = (n-1)\text{Cov}(X,Y) + n m_Y \m_X.$$
Then to obtain $X^\prime X$, border $(X^\prime X)_0$ symmetrically by a vector of column sums (given by $n \m_X$) with the value $n$ on the diagonal; and to obtain $X^\prime Y$, augment the vector $(X^\prime Y)_0$ with the sum $n m_Y$. For instance, when using the first column for the constant term, these bordered matrices will look like
$$X^\prime X = \pmatrix{
n & n\m_X \\
n\m_X^\prime & (X^\prime X)_0
}$$
and
$$X^\prime Y = \left(n m_Y, (X^\prime Y)_0\right)^\prime$$
in block-matrix form.
If the means are not available--the question did not indicate they are--then replace them all with zeros. The output will estimate an "intercept" of $0$, of course, and the standard error of that intercept will likely be incorrect, but the remaining coefficient estimates and standard errors will be correct.
Code
The following R code generates data, uses the preceding formulas to compute $b$, $s^2$, and the diagonal of $V$ from only the means and covariances of the data (along with the values of $n$ and $p$ of course), and compares them to standard least-squares output derived from the data. In all examples I have run (including multiple regression with $p\gt 1$) agreement is exact to the default output precision (about seven decimal places).
For simplicity--to avoid doing essentially the same set of operations three times--this code first combines all the summary data into a single matrix v and then extracts $X^\prime X$, $X^\prime Y$, and $Y^\prime Y$ from its entries. The comments note what is happening at each step.
n <- 24
p <- 3
beta <- seq(-p, p, length.out=p)# The model
set.seed(17)
x <- matrix(rnorm(n*p), ncol=p) # Independent variables
y <- x %*% beta + rnorm(n) # Dependent variable plus error
#
# Compute the first and second order data summaries.
#
m <- rep(0, p+1) # Default means
m <- colMeans(cbind(x,y)) # If means are available--comment out otherwise
v <- cov(cbind(x,y)) # All variances and covariances
#
# From this point on, only the summaries `m` and `v` are used for the calculations
# (along with `n` and `p`, of course).
#
m <- m * n # Compute column sums
v <- v * (n-1) # Recover sums of squares of residuals
v <- v + outer(m, m)/n # Adjust to obtain the sums of squares
v <- rbind(c(n, m), cbind(m, v))# Border with the sums and the data count
xx <- v[-(p+2), -(p+2)] # Extract X'X
xy <- v[-(p+2), p+2] # Extract X'Y
yy <- v[p+2, p+2] # Extract Y'Y
b <- solve(xx, xy) # Compute the coefficient estimates
s2 <- (yy - b %*% xy) / (n-p-1) # Compute the residual variance estimate
#
# Compare to `lm`.
#
fit <- summary(lm(y ~ x))
(rbind(Correct=coef(fit)[, "Estimate"], From.summary=b)) # Coeff. estimates
(c(Correct=fit$sigma, From.summary=sqrt(s2))) # Residual SE
#
# The SE of the intercept will be incorrect unless true means are provided.
#
se <- sqrt(diag(solve(xx) * c(s2))) # Remove `diag` to compute the full var-covar matrix
(rbind(Correct=coef(fit)[, "Std. Error"], From.summary=se)) # Coeff. SEs
30,759 | Standard error of regression coefficient without raw data
One quick and dirty approach when you have the summaries but not the raw data is to generate a dataset with the specified summaries and sample size and then run the simulated data through your regular routines to compute whatever you want. The mvrnorm function in the MASS package for R will generate random normal data with a given mean vector and covariance matrix. I am sure other programs will as well (or you can create your own by generating data and then multiplying by the appropriate matrix).
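A minimal sketch of that simulation route (my own illustration with made-up summary numbers; mvrnorm's empirical = TRUE argument makes the sample reproduce the supplied mean vector and covariance matrix exactly, rather than only on average):

```r
library(MASS) # for mvrnorm

n  <- 50                                  # reported sample size
mu <- c(x = 2, y = 5)                     # reported means
S  <- matrix(c(1.0, 0.6,
               0.6, 2.0), 2, 2,
             dimnames = list(names(mu), names(mu)))  # reported covariances

set.seed(1)
sim <- as.data.frame(mvrnorm(n, mu = mu, Sigma = S, empirical = TRUE))

# The simulated data now carries exactly those summaries, so standard
# errors can simply be read off an ordinary regression fit.
summary(lm(y ~ x, data = sim))$coefficients
```

Because least-squares coefficients and their standard errors depend on the data only through $n$, the means, and the covariances, the fit on sim reproduces exactly what the raw data would have given.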
From the theoretical side, you can start by thinking about having all your variables centered (mean subtracted so that the current mean is exactly 0; this really only affects the intercept). Then $x'x=var(x)\times (n-1)$, $x'y=cov(x,y)\times(n-1)$, and $y'y=var(y)\times(n-1)$. So just plug those values into the matrix version of the formula for the standard error that you want.
30,760 | Transformation Chi-squared to Normal distribution
One option is to exploit the fact that for any continuous random variable $X$, $F_X(X)$ is uniform (rectangular) on [0, 1]. Then a second transformation using an inverse CDF can produce a continuous random variable with the desired distribution - nothing special about chi-squared to normal here. @Glen_b has more detail in his answer.
If you want to do something weird and wonderful, in between those two transformations you could apply a third transformation that maps uniform variables on [0, 1] to other uniform variables on [0, 1]. For example, $u \mapsto 1 - u$, or $u \mapsto u + k \mod 1$ for any $k \in \mathbb{R}$, or even $u \mapsto u + 0.5$ for $u \in [0, 0.5]$ and $u \mapsto 1 - u$ for $u \in (0.5, 1]$.
But if we want a monotone transformation from $X \sim \chi^2_1$ to $Y \sim \mathcal{N}(0,1)$ then we need their corresponding quantiles to be mapped to each other. The following graphs with shaded deciles illustrate the point; note that I have had to cut off the display of the $\chi^2_1$ density near zero.
For the monotonically increasing transformation, that maps dark red to dark red and so on, you would use $Y = \Phi^{-1}(F_{\chi^2_1}(X))$. For the monotonically decreasing transformation, that maps dark red to dark blue and so on, you could use the mapping $u \mapsto 1-u$ before applying the inverse CDF, so $Y = \Phi^{-1}(1 - F_{\chi^2_1}(X))$. Here's what the relationship between $X$ and $Y$ for the increasing transformation looks like, which also gives a clue how bunched up the quantiles for the chi-squared distribution were on the far left!
If you want to salvage the square root transform on $X \sim \chi^2_1$, one option is to use a Rademacher random variable $W$. The Rademacher distribution is discrete, with $$\mathsf{P}(W = -1) = \mathsf{P}(W = 1) = \frac{1}{2}$$
It is essentially a Bernoulli with $p = \frac{1}{2}$ that has been transformed by stretching by a scale factor of two then subtracting one. Now $W\sqrt{X}$ is standard normal — effectively we are deciding at random whether to take the positive or negative root!
It's cheating a little since it is really a transformation of $(W, X)$ not $X$ alone. But I thought it worth mentioning since it seems in the spirit of the question, and a stream of Rademacher variables is easy enough to generate. Incidentally, $Z$ and $WZ$ would be another example of uncorrelated but dependent normal variables. Here's a graph showing where the deciles of the original $\chi^2_1$ get mapped to; remember that anything on the right side of zero is where $W = 1$ and the left side is $W = -1$. Note how values around zero are mapped from low values of $X$ and the tails (both left and right extremes) are mapped from the large values of $X$.
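A quick simulation check of the $W\sqrt{X}$ construction (my sketch, not part of the original answer):

```r
set.seed(42)
n <- 1e5
x <- rchisq(n, df = 1)                    # X ~ chi-squared with 1 df
w <- sample(c(-1, 1), n, replace = TRUE)  # W ~ Rademacher
y <- w * sqrt(x)                          # should be standard normal

c(mean = mean(y), sd = sd(y))             # close to 0 and 1
```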
Code for plots (see also this Stack Overflow post):
require(ggplot2)
delta <- 0.0001 #smaller for smoother curves but longer plot times
quantiles <- 10 #10 for deciles, 4 for quartiles, do play and have fun!
chisq.df <- data.frame(x = seq(from=0.01, to=5, by=delta)) #avoid near 0 due to spike in pdf
chisq.df$pdf <- dchisq(chisq.df$x, df=1)
chisq.df$qt <- cut(pchisq(chisq.df$x, df=1), breaks=quantiles, labels=F)
ggplot(chisq.df, aes(x=x, y=pdf)) +
geom_area(aes(group=qt, fill=qt), color="black", size = 0.5) +
scale_fill_gradient2(midpoint=median(unique(chisq.df$qt)), guide="none") +
theme_bw() + xlab("x")
z.df <- data.frame(x = seq(from=-3, to=3, by=delta))
z.df$pdf <- dnorm(z.df$x)
z.df$qt <- cut(pnorm(z.df$x),breaks=quantiles,labels=F)
ggplot(z.df, aes(x=x,y=pdf)) +
geom_area(aes(group=qt, fill=qt), color="black", size = 0.5) +
scale_fill_gradient2(midpoint=median(unique(z.df$qt)), guide="none") +
theme_bw() + xlab("y")
#y as function of x
data.df <- data.frame(x=c(seq(from=0, to=6, by=delta)))
data.df$y <- qnorm(pchisq(data.df$x, df=1))
ggplot(data.df, aes(x,y)) + theme_bw() + geom_line()
#because a chi-squared quartile maps to both left and right areas, take care with plotting order
z.df$qt2 <- cut(pchisq(z.df$x^2, df=1), breaks=quantiles, labels=F)
z.df$w <- as.factor(ifelse(z.df$x >= 0, 1, -1))
ggplot(z.df, aes(x=x,y=pdf)) +
geom_area(data=z.df[z.df$x > 0 | z.df$qt2 == 1,], aes(group=qt2, fill=qt2), color="black", size = 0.5) +
geom_area(data=z.df[z.df$x <0 & z.df$qt2 > 1,], aes(group=qt2, fill=qt2), color="black", size = 0.5) +
scale_fill_gradient2(midpoint=median(unique(z.df$qt)), guide="none") +
theme_bw() + xlab("y")
30,761 | Transformation Chi-squared to Normal distribution
[Well, I couldn't locate the duplicate that I thought there was; the nearest I came was the mention of the fact toward the end of this answer. (It's possible it was only discussed in comments on some question, but perhaps there was a duplicate and I just missed it.) I'll give an answer here after all.]
If $X$ is chi-square, with $F$ as its CDF, and $\Phi$ is the CDF of the normal, then $\Phi^{-1}(F(X))$ is normal. This is obvious since the probability integral transform of $X$ gives a uniform, and $\Phi^{-1}(U)$ is normal. So we have a monotonic transformation of the chi-squared to normal.
The same trick works with any two continuous variables.
This gives us a neat counterexample to the various versions of the question "are uncorrelated normal $Y,Z$ bivariate normal?" that come up, since if $Z$ is standard normal and $Y=\Phi^{-1}(F_{\chi^2_1}(Z^2))$, then $Z,Y$ are both normal and they're uncorrelated, but they're definitely dependent (and have a rather pretty bivariate relationship).
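A simulation makes the counterexample tangible (my sketch, not part of the original answer; the correlation is essentially zero even though $Y$ is a monotone function of $Z^2$):

```r
set.seed(17)
n <- 1e5
z <- rnorm(n)
y <- qnorm(pchisq(z^2, df = 1))  # Phi^{-1}(F_{chisq_1}(Z^2)): marginally N(0,1)

cor(z, y)    # essentially zero: uncorrelated
cor(z^2, y)  # clearly positive: dependent through Z^2
```

Running hist(z + y) on this output reproduces the distinctly non-normal histogram of $Z+Y$ referred to below.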
The transformation $T(z)=\Phi^{-1}(F_{\chi^2_1}(z^2))$:
Histogram of a large sample of $Z+Y$ values:
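For illustration, the construction above is easy to check numerically. A quick sketch in Python (numpy/scipy assumed available; the thread itself is not tied to any language): draw standard normal $Z$, set $Y=Φ^{−1}(F_{χ^2_1}(Z^2))$, and verify that $Y$ is marginally standard normal yet uncorrelated with $Z$.

```python
import numpy as np
from scipy.stats import chi2, norm

rng = np.random.default_rng(0)
z = rng.standard_normal(100_000)
# probability integral transform of z^2, then the normal quantile function
y = norm.ppf(chi2.cdf(z**2, df=1))

print(abs(np.corrcoef(z, y)[0, 1]) < 0.02)               # uncorrelated
print(abs(y.mean()) < 0.02 and abs(y.std() - 1) < 0.02)  # marginally ~ N(0,1)
```

Both checks print True: $Y$ is a deterministic function of $|Z|$ (hence strongly dependent on $Z$), yet its correlation with $Z$ vanishes by symmetry.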
30,762 | How to prove that the Fourier Transform of white noise is flat? | The power spectrum at frequency $\lambda \in [-\pi,\pi]$ can be obtained by taking the Fourier transform of the autocovariances $\gamma(\tau)$ of orders $\tau=-\infty,...,-1,0,1,...\infty$:
$$
f(\lambda) = \frac{1}{2\pi} \sum_{\tau=-\infty}^\infty \gamma(\tau) e^{-i\lambda\tau} \,.
$$
Using the facts that in a white noise process $\gamma(-\tau) = \gamma(\tau)$
and $e^{-i\lambda\tau} = \cos(\lambda\tau) - i \sin(\lambda\tau)$,
$\cos(0)=1$ and that $\sum_{\tau=-\infty}^\infty \sin(\lambda\tau) = 0$ for a given $\lambda$, the above expression can be written as:
$$
f(\lambda) = \frac{1}{2\pi} \left(
\gamma(0) + 2 \sum_{\tau=1}^\infty \gamma(\tau) \cos(\lambda\tau) \right) \,.
$$
$\gamma(0)$ is the variance of the process, while the remaining covariances are zero in a white noise process, $\gamma(\tau)=0$ for $\tau\neq 0$. Thus, we are left with the constant:
$$
f(\lambda) = \frac{\gamma(0)}{2\pi} \,.
$$
According to this view in the frequency-domain, a white noise process can be viewed as the sum of an infinite number of cycles with different frequencies where each cycle has the same weight.
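The algebra above can be checked numerically. A sketch in Python (numpy assumed): estimate the autocovariances of simulated white noise with $\gamma(0)=\sigma^2=4$, evaluate the truncated sum $f(\lambda)=\frac{1}{2\pi}\big(\gamma(0)+2\sum_{\tau\ge1}\gamma(\tau)\cos(\lambda\tau)\big)$ at several frequencies, and observe that it is approximately constant at $\gamma(0)/2\pi$.

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(0.0, 2.0, 200_000)   # white noise, gamma(0) = sigma^2 = 4
K = 50                              # truncation order for the sum
g = np.array([np.mean(x[: len(x) - t] * x[t:]) for t in range(K + 1)])

lam = np.linspace(-np.pi, np.pi, 9)
cos_part = np.cos(np.outer(np.arange(1, K + 1), lam))        # shape (K, 9)
f = (g[0] + 2 * (g[1:, None] * cos_part).sum(axis=0)) / (2 * np.pi)

print(np.allclose(f, 4 / (2 * np.pi), atol=0.1))  # flat at gamma(0)/(2*pi)
```

The estimated $\hat\gamma(\tau)$ for $\tau\ge1$ are close to zero, so only the $\gamma(0)$ term survives and the spectrum comes out flat, as derived.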
30,763 | How to prove that the Fourier Transform of white noise is flat? | The question in the title is not the same as the question in the
text or as described in the comments.
The Fourier transform of an
infinitely long sequence is a discrete-time
Fourier transform which is a (complex-valued) periodic function of the frequency
variable $\omega$. See also, @javlacalle's answer. Thus, it cannot be "flat" except in the trivial case when the function is a constant, or one
includes any complex number of magnitude $1$ in the notion of "flat".
Furthermore, when the sequence is a realization of a white noise (normal)
process (which is a sequence of i.i.d. (normal) random variables), then the
Fourier transform of the sequence differs from realization to realization,
and it boggles the mind that all of these Fourier transforms turn out to
be "flat" in any sense of the word.
So, what is asked for in the title of the question is meaningless.
The question asked in the text of the question is, as pointed
out by whuber, essentially the definition of white noise. It is
better to approach the problem of defining white noise by
starting with a sequence
of i.i.d random variables of finite variance $\sigma^2$ and noting that
the autocovariance function is a unit pulse function. To borrow
notation from javlacalle, $\gamma(0) = \sigma^2$, and $\gamma(n) = 0$
for all other integers $n$. From this, it follows that the
power spectral density (Fourier transform of the autocovariance
as per the Wiener-Khinchine theorem) is a constant (which is why the
noise process is called white noise, in mistaken analogy with white light
which is a flat mixture of wavelengths, not frequencies).
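The distinction drawn in this answer — individual realizations have wildly varying transforms, while only the *expected* spectrum is constant — is easy to see numerically. A Python sketch (numpy assumed):

```python
import numpy as np

rng = np.random.default_rng(2)
n, reps = 512, 400
X = rng.standard_normal((reps, n))           # 400 white-noise realizations
P = np.abs(np.fft.rfft(X, axis=1)) ** 2 / n  # periodogram of each realization

# a single periodogram is anything but flat (ordinates fluctuate wildly) ...
print(P[0].std() / P[0].mean() > 0.5)
# ... but averaging over realizations recovers the flat spectrum at sigma^2 = 1
print(abs(P.mean() - 1.0) < 0.05)
```

Both lines print True: any one realization's transform is far from flat, and flatness only emerges for the ensemble average (i.e. the power spectral density).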
30,764 | How to prove that the Fourier Transform of white noise is flat? | Actually, your question is pretty legitimate. However, it needs to be asked in a slightly different manner. You need to specify the distribution of the periodogram ordinates and get a statistical test to examine whether your real data conform to that. One attempt is our recent paper
30,765 | What is the meaning of the R factanal output? | The chi-square statistic and p-value in factanal are testing the hypothesis that the model fits the data perfectly. When the p value is low, as it is here, we can reject this hypothesis - so in this case, the 2-factor model does not fit the data perfectly (this is opposite how it seems you were interpreting the output).
It's worth noting that 89.4% of the variance explained by two factors is very high, so I'm not sure why the 'only'.
The factors themselves are uncorrelated (orthogonal) but that doesn't mean individual measures cannot correlate with both factors. Think about the directions North and East on a compass - they're uncorrelated, but North-East would 'load' onto both of them positively.
Uniquenesses are the variance in each item that is not explained by the two factors.
This link might be useful to your interpretation.
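On the "uniquenesses" point: with orthogonal factors, each item's uniqueness is just 1 minus its communality (the sum of its squared loadings). A small Python illustration with made-up loadings (hypothetical numbers, not taken from any factanal output):

```python
import numpy as np

# hypothetical loadings of 4 items on 2 orthogonal factors
L = np.array([[0.9, 0.1],
              [0.8, 0.3],
              [0.2, 0.8],
              [0.1, 0.9]])
communality = (L ** 2).sum(axis=1)  # variance explained by the two factors
uniqueness = 1.0 - communality      # variance the factors leave unexplained

print(np.allclose(uniqueness, [0.18, 0.27, 0.32, 0.18]))
```

Note how items loading strongly on both factors (like the second row) still have a single uniqueness value — the North/East compass analogy in code.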
30,766 | Why is the eigenvector in PCA taken to be unit norm? | The main aim of Principal Component Analysis (PCA) is to look for the directions on $\mathbb{R}^p$ that maximize the variance of the projected random vector $X=(X_1,\ldots,X_p)$. Specifically, the first PC can be defined as the unit vector $v_{(1)}\in\mathbb{R}^p$ such that
$$v_{(1)}=\arg\max_{v\in\mathbb{R}^p,||v||=1}\mathbb{V}\mathrm{ar}\big[v^TX\big].$$
If you allow vectors that are not of unit norm in the maximization problem, then you will not get a proper solution, since variance of the projection can become arbitrarily large as long as the norm of the vector increases. For example, if $w=\lambda v$, with $v,w\in\mathbb{R}^p$ and $\lambda\to\infty$, then
$$\mathbb{V}\mathrm{ar}\big[w^TX\big]=\lambda^2\mathbb{V}\mathrm{ar}\big[v^TX\big]\to\infty\quad (\text{if }\mathbb{V}\mathrm{ar}\big[v^TX\big]\neq0).$$
This is the reason why you need a unit-norm standardization to constrain the search and avoid improper solutions.
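The scaling argument is easy to demonstrate: rescaling the direction by $\lambda$ multiplies the projection variance by $\lambda^2$, so without the unit-norm constraint the "maximum" runs off to infinity. A quick Python sketch (numpy assumed; the covariance matrix is an arbitrary choice for illustration):

```python
import numpy as np

rng = np.random.default_rng(3)
X = rng.multivariate_normal([0, 0], [[3, 1], [1, 2]], size=50_000)
v = np.array([1.0, 0.0])            # some direction with Var[v^T X] > 0

base = np.var(X @ v)
for lam in (1, 10, 100):
    ratio = np.var(X @ (lam * v)) / base
    print(lam, round(float(ratio), 6))   # grows exactly like lam**2
```

Scaling the vector by 10 scales the projection variance by 100, which is why the maximization only makes sense over unit vectors.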
30,767 | Why is the eigenvector in PCA taken to be unit norm? | It is not true that they "should be of unit length"; PCA works fine without using unit vectors given your data $x$ as long as you use a fixed arbitrary length $l$.
Having said that, you want the eigenvectors $\alpha_k$ of your covariance matrix $C$ to be unit vectors, i.e. $\alpha_k^T \alpha_k = 1$, so you can:
Use the associated eigenvalue $\lambda_k$ as the variance of $\alpha_k^T x$.
Use the eigenvectors as an axis of the ellipsoid fitted to $x$.
The first chapter from Jolliffe's Principal Component Analysis (Introduction) gives a more detailed (and nicer) exposition of these issues.
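The first point can be verified directly: for a unit-norm eigenvector $\alpha_k$ of the sample covariance, $\alpha_k^T C \alpha_k = \lambda_k \alpha_k^T \alpha_k = \lambda_k$, so the eigenvalue equals the variance of the projected data. A Python sketch (numpy assumed; the mixing matrix is arbitrary):

```python
import numpy as np

rng = np.random.default_rng(4)
X = rng.standard_normal((10_000, 3)) @ np.array([[2.0, 0.0, 0.0],
                                                 [1.0, 1.0, 0.0],
                                                 [0.0, 0.0, 0.5]])
C = np.cov(X, rowvar=False)
vals, vecs = np.linalg.eigh(C)   # eigh returns unit-norm eigenvectors
a1 = vecs[:, -1]                 # leading eigenvector

print(np.isclose(np.var(X @ a1, ddof=1), vals[-1]))  # eigenvalue = Var[a1^T x]
```

With any other fixed length $l$ the eigenvalue would instead correspond to variance divided by $l^2$, which is the "fixed arbitrary length" caveat above.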
30,768 | Difference between identity and diagonal covariance matrices | An identity covariance matrix, $\Sigma=I$ has variance = 1 for all variables.
A covariance matrix of the form, $\Sigma=\sigma^2I$ has variance = $\sigma^2$ for all variables.
A diagonal covariance matrix has variance $\sigma^2_i$ for the $i^\text{th}$ variable.
(All three have zero covariances between variates)
A covariance matrix of the form, $\Sigma=\sigma^2I$ has variance = $\sigma^2$ for all variables.
A diagonal covariance mat | Difference between identity and diagonal covariance matrices
An identity covariance matrix, $\Sigma=I$ has variance = 1 for all variables.
A covariance matrix of the form, $\Sigma=\sigma^2I$ has variance = $\sigma^2$ for all variables.
A diagonal covariance matrix has variance $\sigma^2_i$ for the $i^\text{th}$ variable.
(All three have zero covariances between variates) | Difference between identity and diagonal covariance matrices
An identity covariance matrix, $\Sigma=I$ has variance = 1 for all variables.
A covariance matrix of the form, $\Sigma=\sigma^2I$ has variance = $\sigma^2$ for all variables.
A diagonal covariance mat |
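In code, the three cases look like this (Python/numpy, purely for illustration; the variance values are made up):

```python
import numpy as np

p = 3
identity = np.eye(p)                  # Sigma = I: every variance is 1
scaled = 2.5 * np.eye(p)              # Sigma = sigma^2 * I: every variance is 2.5
diagonal = np.diag([1.0, 4.0, 0.25])  # per-variable variances sigma_i^2

for S in (identity, scaled, diagonal):
    off_diag = S - np.diag(np.diag(S))
    print(np.all(off_diag == 0))      # all three: zero covariances
```

All three print True — the forms differ only in their diagonals.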
30,769 | Difference between identity and diagonal covariance matrices | An identity matrix is by definition a matrix with 1's on the diagonal and 0's elsewhere. If you choose to use an identity matrix as your covariance matrix, then you are totally ignoring the data for calculating the variances. Is that really what you mean to do? The only way that could make sense is if you had already standardized the data to have variance 1.
30,770 | Empirical logit transformation on percentage data | I've had luck with setting epsilon to half of the smallest non-zero value and replacing all 0 values with epsilon and all 1 values with 1-epsilon. Then apply the logit transformation.
This method keeps the original form of the logit transformation, but allows 1 and 0 to be transformed to values that match the overall shape of the intended transformation (note the black dots in the figure at raw=0 and 1). In particular, it preserves the quality that 0.5 is transformed to 0, and the rest of the values are symmetric.
On the other hand, adding the smallest non-zero value as described in the paper changes the shape of the curve and destroys the symmetry.
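A sketch of this recipe in Python (numpy assumed; `emp_logit` is just an illustrative name): eps is half the smallest non-zero proportion, and both the 0s and the 1s are pulled in symmetrically before taking the logit.

```python
import numpy as np

def emp_logit(p):
    p = np.asarray(p, dtype=float)
    eps = p[p > 0].min() / 2       # half of the smallest non-zero value
    q = np.clip(p, eps, 1 - eps)   # 0 -> eps, 1 -> 1 - eps
    return np.log(q / (1 - q))

out = emp_logit([0.0, 0.1, 0.5, 0.9, 1.0])
print(out[2] == 0.0)                  # 0.5 still maps to 0
print(abs(out[0] + out[-1]) < 1e-12)  # the endpoints stay symmetric
```

Both checks print True, matching the two properties claimed above: 0.5 maps to 0, and the transformed values remain symmetric about it.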
30,771 | Empirical logit transformation on percentage data | One approach, which would solve the problem you are having, is to use a robust regression method on the raw, untransformed values. For example, in R, you could do the following:
example = data.frame(outcome = c(0, 0, 0.3, 0.7, 1),
                     predictor = c('left', 'left', 'left', 'right', 'right'))
m = glm(outcome ~ predictor, data = example, family = quasibinomial())
summary(m)
30,772 | What to do AFTER nested cross-validation? | To answer your initial question (What to do AFTER Nested Cross-Validation?):
Nested cross-validation gives you several scores based on test data that the algorithm has not yet seen. Ordinary CV ("non-nested") gives you just one such score based on one held-out test set. So you can better evaluate the true performance of your model.
After nested CV you fit the chosen model on the whole dataset. And then you use the model to make predictions on new, unlabeled data (that are not part of your 1000 obs.).
I'm not 100% sure that you perform proper nested CV with an outer and an inner loop. To understand nested CV, I found this description helpful:
(Petersohn, Temporal Video Segmentation, Vogt Verlag, 2010, p. 34)
Thoughts on bootstrapping as a better alternative than (nested) CV can be found here.
P.S.: I presume that you will more likely get answers if you only ask 1 or 2 questions instead of 7 in one post. Maybe you want to split them up so that others can find them more easily.
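To make the outer/inner structure concrete, here is a minimal hand-rolled sketch in Python (numpy only; `fit`, `score` and the ridge toy example are placeholders, not the LASSO/AUC setup from the question). The inner loop tunes lambda using the outer-training data only; the outer-fold scores give the honest performance estimate.

```python
import numpy as np

def nested_cv(X, y, lambdas, fit, score, k_outer=5, k_inner=5, seed=0):
    rng = np.random.default_rng(seed)
    folds = np.array_split(rng.permutation(len(y)), k_outer)
    outer_scores = []
    for i, test in enumerate(folds):
        train = np.concatenate([f for j, f in enumerate(folds) if j != i])
        inner = np.array_split(train, k_inner)
        # inner CV: tune lambda using the outer-training data only
        best_lam, best = None, -np.inf
        for lam in lambdas:
            s = []
            for j, val in enumerate(inner):
                tr = np.concatenate([f for m, f in enumerate(inner) if m != j])
                s.append(score(fit(X[tr], y[tr], lam), X[val], y[val]))
            if np.mean(s) > best:
                best, best_lam = np.mean(s), lam
        model = fit(X[train], y[train], best_lam)  # refit on all outer-train data
        outer_scores.append(score(model, X[test], y[test]))
    return float(np.mean(outer_scores))            # honest performance estimate

# toy stand-in: ridge regression with negative MSE as the score
def ridge(X, y, lam):
    return np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)

def neg_mse(w, X, y):
    return -np.mean((X @ w - y) ** 2)

rng = np.random.default_rng(1)
X = rng.standard_normal((200, 5))
y = X @ np.array([1.0, -2.0, 0.5, 0.0, 3.0]) + 0.1 * rng.standard_normal(200)
print(nested_cv(X, y, [0.01, 1.0, 100.0], ridge, neg_mse) > -0.05)
```

The returned number estimates the performance of the whole procedure (training plus lambda-tuning); the final deployed model is then fit on all the data, as described above.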
30,773 | What to do AFTER nested cross-validation? | Question #5: Which value of lambda [of all the λs returned for the different surrogate models] do I choose?
Obviously, if the λs are practically the same, there is no difficulty as there is essentially no choice involved.
If the λs you find for the different surrogate models bounce all over the place, you are in trouble: that is a symptom that either your sample size is too small to auto-tune λ based on your data set, or that the models are (still) very unstable. In both cases, IMHO you'd need to step back and think again about your modeling approach.
In my work, I almost exclusively encounter situations with extremely small sample sizes. Therefore, I'd always opt for the hyperparameter that yields the least complex model (for any kind of regularization).
While your situation may be different, the fact that you use the LASSO indicates that there is a problem with model complexity, so I guess that would be a sensible approach for you as well.
Question #3: Which value of lambda (lambda.min or lambda.1se) do I want to keep?
The same reasoning about model complexity applies. I'd go for lambda.1se.
Question #6 [and #4]: In the end, I just want one LASSO logistic regression model, with one unique value for each hyper-parameter ... correct?
yes
Question #7: If the answer to Question #5 is yes, how do we obtain an estimate for the AUC value that this model will produce? Is this estimate equivalent to the average of the k = 5 AUC values obtain in Step 5?
No it is not the AUC from step 5. This is measured by the outer loop of the nested validation.
I think it is easiest to think of your model training as including the autotuning of λ. I.e. write a training function that does all that is necessary to auto-tune λ using e.g. [iterated] cross validation and then return a model trained on all data that is handed to the training function. Perform the usual resampling validation for these models.
Questions #1 and #2
... are best answered by reading the code: you work in R, so you can read the code and even work through it step by step. In addition, read up on what the Elements of Statistical Learning say about hyperparameter tuning. AFAIK, that book is the origin of the 1SE idea for hyperparameter tuning.
Long answer:
The key idea behind lambda.1se is that the observed error is subject not only to bias but also to random error (variance). Just picking the lowest observed error risks "skimming" variance: the more models you test, the more likely you are to observe an accidentally good-looking model. lambda.1se tries to guard against this.
There are (at least) 3 different sources of variance here:
finite test set variance: the actual composition of the finite test set for the surrogate model in question ("finite test set error")
model instability: the variance of the true performance of the surrogate models around the average performance of models of that training sample size for the given problem
variance of the given data set with respect to all possible data sets of size $n$ for the given problem.
This last type of variance is important if you want to compare e.g. algorithms, but less so if you want to estimate the performance you can achieve for the data set at hand.
The finite test set variance can be overwhelmingly large for small sample size problems. You can get an idea of the order of magnitude by modeling the testing procedure as a Bernoulli trial: you can know the size of this variance as it is tied to the observed performance. You could in principle construct confidence intervals around your observed performance using this, and decide to use the least complex model that cannot reliably be distinguished from the best performance observation you got. This is basically the idea behind lambda.1se.
Model instability causes additional variance (which by the way increases with increasing model complexity). But usually, one characteristic of a good model is that it is actually stable. Regarding the λ, stable models (and a data set that is large enough to do the estimation of λ reliably)
imply that the same λ would always be returned. The other way round, λs that vary a lot indicate that the optimization of λ was not successful.
Obviously, if the λs are practically the same, there is no difficulty as there is essentia | What to do AFTER nested cross-validation?
Question #5: Which value of lambda [of all the λs returned for the different surrogate models] do I choose?
Obviously, if the λs are practically the same, there is no difficulty as there is essentially no choice involved.
If the λs you find for the different surrogate models bounce all over the place, you are in trouble: that is a symptom that either your sample size is too small to auto-tune λ based on your data set, or that the models are (still) very unstable. In both cases, IMHO you'd need to step back and think again about your modeling approach.
I encounter nearly only situations with extremely small sample sizes in my work. Therefore, I'd always decide for the hyperparameter that yields the least complex model (of any kind of regularization).
While your situation may be different, the fact that you use the LASSO indicates that there is a problem with model complexity, so I guess that would be a sensible approach for you as well.
Question #3: Which value of lambda (lambda.min or lambda.1se) do I want to keep?
The same reasoning about model complexity applies. I'd go for lambda.1se.
Question #6 [and #4]: In the end, I just want one LASSO logistic regression model, with one unique value for each hyper-parameter ... correct?
yes
Question #7: If the answer to Question #5 is yes, how do we obtain an estimate for the AUC value that this model will produce? Is this estimate equivalent to the average of the k = 5 AUC values obtain in Step 5?
No it is not the AUC from step 5. This is measured by the outer loop of the nested validation.
I think it is easiest to think of your model training as including the autotuning of λ. I.e. write a training function that does all that is necessary to auto-tune λ using e.g. [iterated] cross validation and then return a model trained on all data that is handed to the training function. Perform the usual resampling validation for these models.
Questions #1 and #2
... are best answered by reading the code: you work in R, so you can read the code and even work though it step by step. In addition, read up what the Elements of Statistical Learning say about hyperparameter tuning. AFAIK, that book is the origin of the 1SE idea for hyperparameter tuning.
Long anwer:
The key idea behind the lambda.1se is that the observed error is subject not only to bias but also to random error (variance). Just picking the lowest observed error risks "skimming" variance: the more models you test, the more likely is observing an accidentally good looking model. lambda.1se tries to guard against this.
There are (at least) 3 different sources of variance here:
finite test set variance: the actual composition of the finite test set for the surrogate model in question ("finite test set error")
model instability: the variance of the true performance of the surrogate models around the average performance of models of that training sample size for the given problem
variance of the given data set with respect to all possible data sets of size $n$ for the given problem.
This last type of variance is important if you want to compare e.g. algorithms, but less so if you want to estimate the performance you can achieve for the data set at hand.
The finite test set variance can be overwhelmingly large for small sample size problems. You can get an idea of the order of magnitude by modeling the testing procedure as a Bernoulli trial: you can know the size of this variance as it is tied to the observed performance. You could in principle construct confidence intervals around your observed performance using this, and decide to use the least complex model that cannot reliably be distinguished from the best performace observation you got. This is basically the idea behind lambda.1se.
Model instability causes additional variance (which, by the way, increases with increasing model complexity). But usually, one characteristic of a good model is that it is actually stable. Regarding λ, stable models (and a data set that is large enough to estimate λ reliably) imply that the same λ would always be returned. Conversely, λs that vary a lot across the surrogate models indicate that the optimization of λ was not successful.
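The 1SE rule itself is easy to state in code. Here is a self-contained sketch with made-up cross-validation results (the numbers and variable names are mine): among the candidate λs, take the most regularized one whose CV error is within one standard error of the best observed CV error.

```r
lambda <- c(1.00, 0.50, 0.25, 0.10, 0.05)  # candidates, most to least regularized
cv_err <- c(1.30, 1.10, 1.02, 1.00, 1.01)  # mean CV error for each lambda
cv_se  <- c(0.06, 0.05, 0.05, 0.05, 0.05)  # standard error of each mean

i_min      <- which.min(cv_err)                 # index of "lambda.min"
threshold  <- cv_err[i_min] + cv_se[i_min]      # best error + 1 SE
lambda_1se <- max(lambda[cv_err <= threshold])  # "lambda.1se"

lambda[i_min]  # 0.10: lowest observed error
lambda_1se     # 0.25: simpler model, not reliably distinguishable from the best
```

This mirrors what cv.glmnet reports as `lambda.min` and `lambda.1se`.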
30,774 | Why are Pearson's residuals from a negative binomial regression smaller than those from a poisson regression? | This is rather straightforward, but the "without using equations" is a substantial handicap. I can explain it in words, but those words will necessarily mirror equations. I hope that will be acceptable / still of some value to you. (The relevant equations are not difficult.)
There are several types of residuals. Raw residuals are simply the difference between the observed response values (in your case the counts) and the model's predicted response values. Pearson residuals divide those by the standard deviation (the square root of the variance function for the particular version of the generalized linear model that you are using).
The standard deviation associated with the Poisson distribution is smaller than that of the negative binomial. Thus, when you divide by a larger denominator, the quotient is smaller.
In addition, the negative binomial is more appropriate to your case, because your counts will be overdispersed in the population. That is, their variance will not equal their mean.
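A small simulated demonstration (my sketch, not part of the original answer; it assumes the MASS package, which ships with standard R installations) makes the denominator effect visible:

```r
library(MASS)  # for glm.nb()
set.seed(1)
x <- rnorm(200)
y <- rnbinom(200, mu = exp(0.5 + 0.8 * x), size = 1.5)  # overdispersed counts

fit_pois <- glm(y ~ x, family = poisson)
fit_nb   <- glm.nb(y ~ x)

# Pearson residuals divide by the model's standard deviation, so the
# negative binomial's larger denominator yields smaller residuals.
sd(residuals(fit_pois, type = "pearson"))  # well above 1: overdispersion
sd(residuals(fit_nb,   type = "pearson"))  # close to 1
```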
30,775 | Why are Pearson's residuals from a negative binomial regression smaller than those from a poisson regression? | For the Poisson model, if the expectation for the $i$th observation $Y_i$ is $\mu_i$, its variance is $\mu_i$, & the Pearson residual is therefore
$$\frac{y_i-\hat\mu_i}{\sqrt{\hat\mu_i}}$$
where $\hat\mu$ is the estimate of the mean. The parametrization of the negative binomial model used in MASS is explained here. If the expectation for the $i$th observation $Y_i$ is $\mu_i$, its variance is $\mu_i + \frac{\mu_i^2}{\theta}$, & the Pearson residual is therefore
$$\frac{y_i-\tilde\mu_i}{\sqrt{\tilde\mu_i+\frac{\tilde\mu_i^2}{\theta}}}$$
where $\tilde\mu$ is the estimate of the mean. The smaller the value of $\theta$ (i.e. the more extra-Poisson variance), the smaller the residual compared to its Poisson equivalent. [But as @whuber has pointed out, the estimates of the means are not the same, $\hat\mu\neq\tilde\mu$, because the estimation procedure weights observations according to their assumed variance. If you were to make replicate measurements for the $i$th predictor pattern, they'd get closer, & in general adding a parameter should give a better fit across all observations, though I don't know how to demonstrate this rigorously. All the same, the population quantities you're estimating are larger if the Poisson model holds, so it shouldn't be a surprise.]
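These two formulas can be checked numerically against R's own `residuals(..., type = "pearson")` (a sketch with simulated data; `glm.nb` is from the MASS package):

```r
library(MASS)
set.seed(2)
x <- rnorm(150)
y <- rnbinom(150, mu = exp(0.3 + x), size = 2)

fit_pois <- glm(y ~ x, family = poisson)
fit_nb   <- glm.nb(y ~ x)
theta    <- fit_nb$theta

mu_p <- fitted(fit_pois)
mu_n <- fitted(fit_nb)

r_pois <- (y - mu_p) / sqrt(mu_p)                   # variance mu
r_nb   <- (y - mu_n) / sqrt(mu_n + mu_n^2 / theta)  # variance mu + mu^2/theta

all.equal(unname(r_pois), unname(residuals(fit_pois, type = "pearson")))  # TRUE
all.equal(unname(r_nb),   unname(residuals(fit_nb,   type = "pearson")))  # TRUE
```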
$$\frac{y_i-\hat\mu_i}{\sqrt{\hat\mu_i}}$$
where $\hat\mu$ | Why are Pearson's residuals from a negative binomial regression smaller than those from a poisson regression?
For the Poisson model, if the expection for the $i$th observation $Y_i$ is $\mu_i$ its variance is $\mu_i$, & the Pearson residual therefore
$$\frac{y_i-\hat\mu_i}{\sqrt{\hat\mu_i}}$$
where $\hat\mu$ is the estimate of the mean. The parametrization of the negative binomial model used in MASS is explained here. If the expection for the $i$th observation $Y_i$ is $\mu_i$ its variance is $\mu_i + \frac{\mu^2}{\theta}$, & the Pearson residual therefore
$$\frac{y_i-\tilde\mu_i}{\sqrt{\tilde\mu_i+\frac{\tilde\mu'^2}{\theta}}}$$
where $\tilde\mu$ is the estimate of the mean. The smaller the value of $\theta$— i.e. the more extra-Poisson variance—, the smaller the residual compared to its Poisson equivalent. [But as @whuber has pointed out, the estimates of the means are not the same, $\hat\mu\neq\tilde\mu$, because the estimation procedure weights observations according to their assumed variance. If you were to make replicate measurements for the $i$th predictor pattern, they'd get closer, & in general adding a parameter should give a better fit across all observations, though I don't know how to demonstrate this rigorously. All the same, the population quantities you're estimating are larger if the Poisson model holds, so it shouldn't be a surprise.] | Why are Pearson's residuals from a negative binomial regression smaller than those from a poisson re
For the Poisson model, if the expection for the $i$th observation $Y_i$ is $\mu_i$ its variance is $\mu_i$, & the Pearson residual therefore
$$\frac{y_i-\hat\mu_i}{\sqrt{\hat\mu_i}}$$
where $\hat\mu$ |
30,776 | Intuition about a joint entropy | As a general rule, additional information never increases the entropy, which is formally stated as:
\begin{equation}
H(X|Y) \leq H(X) \, \, \, *
\end{equation}
the equality holds if $X$ and $Y$ are independent, which implies $H(X|Y) = H(X)$.
This result can be used to prove that the joint entropy satisfies $H(X_1, X_2, ..., X_n) \leq \sum_{i=1}^{n} H(X_i)$. To demonstrate it, consider the simple case $H(X,Y)$. According to the chain rule, we can write the joint entropy as below
\begin{equation}
H(X,Y) = H(X|Y) + H(Y)
\end{equation}
Considering inequality $*$, $H(X|Y)$ never increases the entropy of variable $X$, and hence $H(X,Y) \leq H(X) + H(Y)$. Using induction one can generalize this result to the cases that involve more than two variables.
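The inequality is easy to verify numerically for any concrete joint distribution (a sketch of mine, not from the original answer):

```r
# Entropy in bits of a probability vector.
H <- function(p) { p <- p[p > 0]; -sum(p * log2(p)) }

# A dependent pair: X and Y tend to take the same value.
pxy <- matrix(c(0.4, 0.1,
                0.1, 0.4), nrow = 2, byrow = TRUE)

H_joint <- H(as.vector(pxy))                  # H(X,Y) ~ 1.72 bits
H_sum   <- H(rowSums(pxy)) + H(colSums(pxy))  # H(X) + H(Y) = 2 bits
H_joint <= H_sum                              # TRUE, strict because X, Y dependent
```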
Hope it has helped to reduce the ambiguity (or your entropy) about the joint entropy!
\begin{equation}
H(X|Y) \leq H(X) \, \, \, *
\end{equation}
the equality holds if $X$ and $Y$ are in | Intuition about a joint entropy
as a general rule, additional information never increases the entropy, which is formally stated as:
\begin{equation}
H(X|Y) \leq H(X) \, \, \, *
\end{equation}
the equality holds if $X$ and $Y$ are independent, which implies $H(X|Y) = H(X)$.
This result can be used to prove the joint entropy $H(X_1, X_2, ..., X_n) \leq \sum_{i=1}^{n} H(X_i)$. To demonstrate it, consider a simple case $H(X,Y)$. According to the chain rule, we can write the join entropy as below
\begin{equation}
H(X,Y) = H(X|Y) + H(Y)
\end{equation}
Considering inequality $*$, $H(X|Y)$ never increases the entropy of variable $X$, and hence $H(X,Y) \leq H(X) + H(Y)$. Using induction one can generalize this result to the cases that involve more than two variables.
Hope it has helped to reduce the ambiguity (or your entropy) about the joint entropy! | Intuition about a joint entropy
as a general rule, additional information never increases the entropy, which is formally stated as:
\begin{equation}
H(X|Y) \leq H(X) \, \, \, *
\end{equation}
the equality holds if $X$ and $Y$ are in |
30,777 | Intuition about a joint entropy | There is another point of view of the Shannon entropy. Imagine you want to guess, through questions, what the concrete value of a variable is. For simplicity, imagine that the variable can only take eight different values $\left(0,1,...,7\right)$, and all are equally probable.
The most efficient way is to perform a binary search. First you ask whether the value is greater or less than 4. Then compare it against 2 or 6, and so on. In total you won't need more than three questions (which is the number of bits of this concrete distribution).
We can carry the analogy over to the case of two variables. If they are not independent, then knowing the value of one of them helps you make better guesses (on average) for the next question (this is reflected in the results pointed out by omidi). Hence, the entropy is lower, unless they are completely independent, in which case you need to guess their values independently. Saying that the entropy is lower means (for this concrete example) that you need to ask fewer questions on average (i.e. more often than not you will make good guesses).
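The three-question figure is exactly the entropy of the distribution (a quick numeric check, my addition):

```r
# Uniform over 8 equally likely values: entropy is log2(8) = 3 bits,
# matching the three yes/no questions of a binary search.
p <- rep(1 / 8, 8)
-sum(p * log2(p))  # 3
log2(8)            # 3, same thing
```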
30,778 | Intuition about a joint entropy | It appears you are reasoning along the lines of "if more information when known, then more entropy when unknown". This is not a correct intuition, because, if the distribution is unknown, we don't even know its entropy. If the distribution is known, then
entropy quantifies the amount of information needed to describe uncertainty about the realization of the random variable, which remains unknown (we only know the structure surrounding this uncertainty, by knowing the distribution). Entropy does not quantify the information "present" in the distribution. On the contrary: the more information "included" in the distribution, the less information is "needed" to describe the uncertainty, and so the lower the entropy. Consider the uniform distribution: it contains very little information, because all possible values of the variable are equiprobable: hence it has maximum entropy among all distributions with bounded support.
As for Joint Entropy, you may think of it as follows: the joint distribution contains information about whether two variables are dependent or not, plus information sufficient to derive the marginal distributions. The marginal distributions do not contain information about whether two random variables are dependent or independent. So the joint distribution has more information, and affords us less uncertainty surrounding the random variables involved:
More information included in the distribution $\rightarrow$ less uncertainty surrounding the variables $\rightarrow$ less information needed to describe this uncertainty $\rightarrow$ less entropy.
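A small numeric illustration of that chain (mine, not the original poster's): on a fixed support, the more "peaked" (informative) the distribution, the lower its entropy.

```r
H <- function(p) { p <- p[p > 0]; -sum(p * log2(p)) }

H(rep(0.25, 4))           # 2 bits: uniform, maximum uncertainty
H(c(0.7, 0.1, 0.1, 0.1))  # ~1.36 bits: peaked, less uncertain
H(c(1, 0, 0, 0))          # 0 bits: no uncertainty left at all
```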
30,779 | Prediction of a binary variable | A binary logistic regression is generally used for fitting a model to a binary output, but formally the results of logistic regression are not themselves binary, they are continuous probability values (pushed toward 0 or 1 by the inverse logit transformation, but continuous between 0 and 1 nonetheless). It sounds like the software you are using is rounding the output for you, which you don't want. Here's a simple example demonstrating how you could accomplish this in R, since it sounds like you are amenable to trying new software:
# generate sample data
set.seed(123)
x = rnorm(100)
y= as.numeric(x>0)
# let's shuffle a handful so we don't fit a perfect model
ix = sample(1:100, 10)
y[ix]= 1-y[ix]
# Let's take a look at our observations
df = data.frame(x,y)
plot(df)
# Build the model
m = glm(y~x, family=binomial(logit), data=df)
# Look at results
summary(m)
# generate predictions. Here, since I'm not passing in new data
# it will use the training data set to generate predictions
y.pred = predict(m, type="response")
plot(x, y.pred, col=(round(y.pred)+1))
30,780 | Prediction of a binary variable | Yes, you can get the predicted probability that an observation is yes from a logistic regression model. If you have the estimated coefficients from your model fit, you can use those to get the predicted probabilities thusly:
$$
\widehat{p(y_i=1)} = \frac{\exp(\hat\beta_0 + \hat\beta_AA_i + \hat\beta_BB_i + \hat\beta_CC_i)}{1+\exp(\hat\beta_0 + \hat\beta_AA_i + \hat\beta_BB_i + \hat\beta_CC_i)}
$$
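In R this is one line via the inverse-logit function `plogis()`; the coefficient and predictor values below are made up purely for illustration:

```r
# Hypothetical coefficient estimates and one observation's predictor values.
b0 <- -1.2; bA <- 0.8; bB <- -0.5; bC <- 1.1
A <- 1; B <- 0; C <- 2

eta <- b0 + bA * A + bB * B + bC * C  # linear predictor
exp(eta) / (1 + exp(eta))             # the formula above, ~0.858
plogis(eta)                           # same value via the built-in inverse logit
```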
30,781 | One sided confidence interval for hypothesis testing | One sided confidence intervals are dual to one tailed hypothesis tests just as regular two sided CIs are dual to two tailed tests.
If $\theta$ is a parameter, and we say that $(a,\infty)$ is a one sided CI for $\theta$, then this means that $a$ was found by a process that will yield a value below the true value of $\theta$ $95\%$ of the time.
In your case, the parameter of interest is the difference of means: $\mu_x-\mu_y$. If you construct a one sided confidence interval for this parameter, of the form $(a,\infty)$, then you can say with 95% confidence that $a<\mu_x-\mu_y$. Thus, if $0\leq a$, you may reject the null hypothesis.
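In R, `t.test()` produces exactly this kind of one sided interval when you set `alternative = "greater"` (simulated data for illustration):

```r
set.seed(7)
x <- rnorm(30, mean = 1.0)
y <- rnorm(30, mean = 0.2)

tt <- t.test(x, y, alternative = "greater")  # H1: mu_x - mu_y > 0
tt$conf.int  # one sided 95% interval (a, Inf) for mu_x - mu_y
tt$p.value   # p < .05 exactly when the lower bound a exceeds 0
```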
30,782 | One sided confidence interval for hypothesis testing | Rejecting a null is the same thing as achieving significance. If you understand "how to use confidence intervals to reject a null hypothesis", you've already done the other thing.
In short, if the interval for $\mu_x-\mu_y$ doesn't include zero, you reject the null; equivalently, you have achieved significance, thereby concluding $\mu_x > \mu_y$.
In short, if the int | One sided confidence interval for hypothesis testing
Rejecting a null is the same thing as achieving significance. If you understand "how to use confidence intervals to reject a null hypothesis", you've already done the other thing.
In short, if the interval for $\mu_x-\mu_y$ doesn't include zero, your reject the null; equivalently you have achieved significance, thereby concluding $\mu_x > \mu_y$ | One sided confidence interval for hypothesis testing
Rejecting a null is the same thing as achieving significance. If you understand "how to use confidence intervals to reject a null hypothesis", you've already done the other thing.
In short, if the int |
30,783 | One sided confidence interval for hypothesis testing | Use a one sided z or t confidence interval or hypothesis test:
if you are specifically asked a question about whether the unknown mean is more than or not more than a specified value,
or if you are specifically asked a question about whether the unknown mean is less than or not less than a specified value,
or if the practical consequences of the unknown mean being more than the specified value are similar to the practical consequences of the unknown mean being equal to the specified value, while the practical consequences of the unknown mean being less than the specified value are radically different, or vice versa.
Use a two sided z or t confidence interval or hypothesis test:
if you are specifically asked a question about whether or not the unknown mean is equal to a specified value,
or if the practical consequences of the unknown mean being above the specified value are similar to those of its being below the specified value, while the practical consequences of the unknown mean being equal to the specified value are radically different,
confidence interval or
hypothesis test:
If you are specifically asked a
question about whether the
unknown mean is more than or
not more than a specified value,
or if you are sp | One sided confidence interval for hypothesis testing
Use a one sided z or t
confidence interval or
hypothesis test:
If you are specifically asked a
question about whether the
unknown mean is more than or
not more than a specified value,
or if you are specifically asked a
qustion about whether the
unknown mean is less than or
not less than a specified value,
or if the practical consequences
of the unknown mean being
more than the specified value
are similar to the practical
consequences of the unknown
mean being equal the specified
value
while the practical consequences
of the unknown mean being less
than the specified value are
radically different,
or vice versa.
Use a two sided z or t
confidence interval or
hypothesis test:
if you are specifically asked a
question about whether or not
the unknown mean is equal to a
specified value,
or if the poractical consequences
of the unknown mean being
above the specified value are
similar to those of its being
below the specified value,
while the practical consequences
of the unknown mean being
equal to the specified value are
radically different, | One sided confidence interval for hypothesis testing
Use a one sided z or t
confidence interval or
hypothesis test:
If you are specifically asked a
question about whether the
unknown mean is more than or
not more than a specified value,
or if you are sp |
30,784 | One sided confidence interval for hypothesis testing | While that answer works well for a Stats class, a real-world example would look something like this:
Your boss asks you to calculate how long it will take to complete a project. You take into account all of the activities (using proper Project Management techniques) and you tell her: "I'm 90% confident that it will be completed in 90 days." She replies: "Sounds good, but how long will it take for you to be 99% confident?" Again, using the proper PM techniques and a "one-tailed z", you tell her: "I am 99% confident that it will be complete in 95 days." In other words, there's a 10% chance the project will exceed 90 days and a 1% chance that the project will exceed 95 days. Because we're dealing with a range of values, the "tail" is only above (greater than) your target day, rather than half above and half below as in a two-tailed setting, where you are predicting a single value.
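Numbers like these come from one sided normal quantiles. A sketch with purely illustrative figures (assuming completion time is roughly normal, which real project estimates only approximate):

```r
# Illustrative only: suppose the completion-time estimate is 84 days
# with a standard deviation of 4.8 days.
m <- 84; s <- 4.8

qnorm(0.90, mean = m, sd = s)  # ~90 days: 90% confident of finishing by then
qnorm(0.99, mean = m, sd = s)  # ~95 days: 99% confident
```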
30,785 | Why are the number of false positives independent of sample size, if we use p-values to compare two independent datasets? | I think this question is caused by a fundamental confusion about how the Neyman-Pearson paradigm for statistical hypothesis testing works, but a very interesting one. The central analogy I will use to discuss this is the idea of absolute vs. relative reference in computer science, which will be most familiar to people via how it plays out in writing functions in Excel. (There is a quick guide to this here.) The idea is that something can be fixed to a given position on an absolute scale, or can only be in a position relative to something else; the result of which is that if the 'something else' changes, the latter will change also, but the former would remain the same.
The central concept in hypothesis testing, on which everything else is built, is that of a sampling distribution. That is, how a sample statistic (like a sample slope) will bounce around if an otherwise identical study is conducted over and over ad infinitum. (For additional reference, I have discussed this here and here.) For statistical inference you need to know three things about the sampling distribution: its shape, its mean, and its standard deviation (called the standard error). Given some standard assumptions, if the errors are normally distributed, the sampling distribution of the slope of a regression model will be normally distributed, centered on the true value, with a $SE=\frac{1}{\sqrt{N-1}}\sqrt{s^2/\Sigma(x_i-\bar x)^2}$. (If the residual variance, $s^2$, is estimated from your data, the sampling distribution will be $t_{df=N-(1+p)}$, where $N$ is the number of data you have, and $p$ is the number of predictor variables.)
In the Neyman-Pearson version of hypothesis testing, we have a reference, or null, value for a sample statistic, and an alternative value. For instance, when assessing the relationship between two variables, the null value of the slope is typically 0, because that would mean there is no relationship between the variables, which is often an important possibility to rule out for our theoretical understanding of a topic. The alternative value can be anything, it might be a value posited by some theory, or it might be the smallest value that someone would care about from a practical standpoint, or might be something else. Let's say that the null and alternative hypotheses regarding the true value of the slope of the relationship between $X$ and $Y$ in the population are 0 and 1, respectively. These numbers refer to an absolute scale--no matter what you choose for $\alpha$, $\beta$ / power, $N$, etc., they will remain the same. If we stipulate some values ($\alpha=.05$, $s^2=1$, $\text{Var}(X)=1$, & $N=10$), we can calculate some things like what the sampling distributions would look like under the null and alternative hypotheses, or how much power the test would have.
Now, how should we go about deciding whether to reject the null hypothesis under this scenario? There are at least two ways: We could check if our p-value is less than $\alpha$, or just check if our beta is greater than the absolute numerical value that corresponds to those numbers in this situation. The key thing to realize is that the former is relative to the sampling distribution under the null hypothesis, but the latter is an absolute position on the number line. If we calculated the sampling distributions again, but with $N = 25$, they would look different (i.e., they would have narrower standard deviations), but those values defined relative to the sampling distribution of the null would have the same relationship to the null hypothesis test, because they are defined that way. That is, e.g., the upper 2.5% of the null sampling distribution would still comprise 2.5% of the total area under the curve, but that line would have moved relative to the absolute numerical scale underneath it. On the other hand, if we only rejected the null if our estimated beta were greater than the value we calculated above, we would be less and less likely to reject the null if we kept that value in place and continually increased $N$.
Consider the figure below. The alpha threshold is defined as that point which demarcates the outermost 5% of the area under the curve (here I have displayed only the upper tail, the lower tail would work the same way). When $N = 10$, this happens to fall at $X = .735$ (given the values we stipulated above). When $N$ increases to $25$, the standard error shrinks and the sampling distribution becomes 'narrower'. Because $\alpha = .05$ is defined relative to the sampling distribution, it shifts in along with the rest of the sampling distribution. The corresponding value of the sample slope becomes $.417$. If the threshold had stayed in the same place on the absolute scale ($.735$), the rate of false positives would have fallen to $.00056$.
Note that this latter approach to hypothesis testing, that of comparing the observed actual value of the sample slope to a fixed cutoff point, is very much not the way hypothesis testing is done, but I believe this is the basis for your confusion.
30,786 | Why are the number of false positives independent of sample size, if we use p-values to compare two independent datasets? | You chose the proportion when you chose this rule: " (i.e. p-value < 0.05)"
That literally means you're choosing to have 5% false positives when H0 is true.
http://en.wikipedia.org/wiki/P-value
If you want the proportion of false positives to go down, you shouldn't hold your significance level constant as you increase sample size. Indeed there are very good arguments for having your significance level decrease with large sample sizes (such as on the basis of minimizing the total cost of wrong decisions).
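A quick simulation makes the point (illustrative only): under $H_0$, with a fixed significance level the rejection rate tracks $\alpha$, not $N$:

```r
set.seed(1)
# false-positive rate of a two-sample t-test when H0 is true
fp_rate <- function(N, reps = 2000)
  mean(replicate(reps, t.test(rnorm(N), rnorm(N))$p.value < 0.05))
fp_rate(10)    # approximately 0.05
fp_rate(100)   # still approximately 0.05
```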
30,787 | Enormous coefficients in logistic regression - what does it mean and what to do? | I would suggest that the massive coefficients, and the correspondingly massive standard errors, would almost definitely be caused by quasi-complete or complete separation. That is, for some combination of parameters, either everyone had the outcome or nobody had the outcome, and so the coefficient heads towards infinity (or negative infinity.)
This tends to happen especially when one specifies a lot of interaction terms, as the chances of having a combination of factors which results in some "empty" (no outcomes in cell, or everyone has outcomes) cells will increase.
See the following page for some further details and suggested strategies (link updated March 2021):
https://stats.idre.ucla.edu/other/mult-pkg/faq/general/faqwhat-is-complete-or-quasi-complete-separation-in-logisticprobit-regression-and-how-do-we-deal-with-them/
More generally, it means that you're probably trying to do "too much" with your model for the size of your dataset (particularly the number of outcomes observed).
EDIT: A couple of pragmatic suggestions
You might try (1) quick and simple: drop the interaction terms from your model, to see if that helps (whether this makes sense from a research question perspective is an entirely different issue); or (2) get R to make you a bi-i-i-i-g contingency table for (e.g. rows) the combinations described in the interactions by (e.g. columns) the outcome variable. You might be able to see some evidence of separation here.
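To illustrate with a toy example (not the original question's data): under complete separation, glm() warns that fitted probabilities of 0 or 1 occurred and reports an enormous coefficient with a huge standard error, and a simple table() reveals the empty cells:

```r
# Complete separation: x predicts y perfectly
x <- c(0, 0, 0, 0, 1, 1, 1, 1)
y <- c(0, 0, 0, 0, 1, 1, 1, 1)
fit <- glm(y ~ x, family = binomial)  # warns: fitted probabilities 0 or 1
summary(fit)$coefficients             # enormous slope and standard error
table(x, y)                           # the off-diagonal cells are empty
```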
30,788 | Unknown p-value calculation | The second thing looks like it is an approximation to the calculation being used for the x+y < 20 case, but based off the Stirling approximation.
Normally when it's being used for this sort of approximation, people would use at least the next additional term (the factor of $\sqrt{2\pi n}$ in the approximation for $n!$), which would improve the relative approximation substantially for small $n$.
For example, if $x$ and $y$ are both 10, the first calculation gives about 0.088 while the approximation when the factor of $\sqrt{2\pi n}$ is included in all the terms is about 0.089, close enough for most purposes... but omitting that term in the approximation gives 0.5 - which is really not close enough! The author of that function clearly hasn't bothered to check the accuracy of his approximation at the boundary case.
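These numbers are easy to reproduce. The exact probability here is $\binom{x+y}{x}/2^{x+y+1}$ (note the extra factor of $1/2$ carried in the code's $(x+y+1)\log 2$ term):

```r
x <- 10; y <- 10
exact <- choose(x + y, x) / 2^(x + y + 1)   # first calculation, ~0.088
# Stirling approximation with the sqrt(2*pi*n) factors omitted:
stirling <- exp((x + y) * log(x + y) - x * log(x) - y * log(y) -
                (x + y + 1) * log(2))        # gives exactly 0.5 -- way off
# lgamma(x+1) is log(x!) exactly, so this matches `exact`:
lg <- exp(lgamma(x + y + 1) - lgamma(x + 1) - lgamma(y + 1) -
          (x + y + 1) * log(2))
round(c(exact, stirling, lg), 4)
```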
For this purpose, the author should probably simply have called the built in lgamma function - specifically, by using this instead of what he has for log_p1:
log_p1 <- lgamma(x+y+1)-lgamma(x+1)-lgamma(y+1)-(x+y+1)*log(2)
which results in the answer he's trying to approximate (since lgamma(x+1) actually returns $\log(x!)$, the very thing he's trying to approximate - poorly - via Stirling approximation).
Similarly, I'm not sure why the author doesn't use the built in choose function in the first part, a function which comes in the standard distribution of R. For that matter, the relevant distribution function is probably built-in too.
You don't really need two separate cases; the lgamma one works just fine right down to the smallest values. On the other hand, the choose function works for pretty big values (e.g. choose(1000,500) works just fine). The safer option is probably lgamma, though you'd need to have quite large $x$ and $y$ before it was an issue.
With more information it should be possible to identify the source of the test. My guess is the writer has taken it from somewhere, so it should be possible to track it down. Do you have some context for this?
When you say 'optimize' do you mean make it faster, shorter, more maintainable, or something else?
Edit after quickly reading over the paper:
The authors appear to be wrong on a number of points. Fisher's exact test doesn't assume the margins are fixed, it simply conditions on them, which is not the same thing at all, as discussed, for example, here, with references. Indeed, they seem pretty much completely ignorant of the debate over conditioning on margins and why it is done. The links there are worth reading.
The paper's authors do at least seem to understand that the probabilities they give must be cumulated to give p-values; for example near the middle of the first column of page 5 (emphasis mine):
The statistical significance according to
Fisher’s exact test for such a result is 4.6% (two-tail
P-value, i.e., the probability for such a table to occur
in the hypothesis that actin EST frequencies are independent
of the cDNA libraries). In comparison,
the P-value computed from the cumulative form
(Equation 9, see Methods) of Equation 2 (i.e., for the
relative frequency of actin ESTs to be the same in
both libraries, given that at least 11 cognate ESTs are
observed in the liver library after two were observed
in the brain library) is 1.6%.
(though I am not sure I agree with their calculation of the value there; I'd have to check carefully to see what they're actually doing with the other tail.)
I don't think the program does that.
Beware, however, that their analysis is not a standard binomial test; they use a Bayesian argument to derive a p-value in an otherwise frequentist test. They also seem - somewhat oddly, to my mind - to condition on $x$, rather than $x+y$. This means that they must end up with something like a negative binomial rather than a binomial, but I find the paper really badly organized and terribly badly explained (and I'm used to working out what's going on in statistics papers), so I am not going to be certain unless I go through carefully.
I'm not even convinced that the sum of their probabilities is 1 at this point.
There's a lot more to be said here, but the question isn't about the paper, it's about the implementation in the program.
--
Anyway, the upshot is, at least the paper correctly identifies that p-values consist of a sum of probabilities like those in equation 2, but the program doesn't. (See eqn 9a and 9b in the Methods section of the paper.)
The code is simply wrong on that.
[You could use pbinom, as @whuber's comment would imply, to work out the individual probabilities (but not the tail, since it's not a binomial test as they structure it) but then there's an extra factor of 1/2 in their equation 2 so if you want to replicate the results in the paper, you need to alter them.]
You can obtain it, with some fiddling, from pnbinom -
The usual forms of the negative binomial are either the number of trials to the $k^{\text{th}}$ success or the number of failures to the $k^{\text{th}}$ success. The two are equivalent; Wikipedia gives the second form here. The probability function is:
$$
{k+r-1 \choose k}\cdot (1-p)^r p^k,\!
$$
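As a quick sanity check (with arbitrary illustrative parameter values), this pmf matches R's built-in dnbinom when we set size = r and prob = 1 - p:

```r
p <- 0.4; k <- 3; r <- 5   # arbitrary illustrative values
manual  <- choose(k + r - 1, k) * (1 - p)^r * p^k
builtin <- dnbinom(k, size = r, prob = 1 - p)
all.equal(manual, builtin)   # TRUE
```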
The equation 2 on p4 (and so also eqn 1 on p3) is a negative binomial, but shifted by 1. Let $p = N_1/(N_1+N_2)$, $k=x$ and $r = y+1$.
This makes me concerned that since the limits on $y$ have not been similarly shifted, that their probabilities may not even add to 1.
That would be bad.
30,789 | What's a stationary VAR? | VAR is actually an equation. We say that a process $X_t$ is a VAR when it satisfies the following equation:
$$X_t=\alpha+\Phi_1X_{t-1}+...+\Phi_pX_{t-p}+\varepsilon_t,$$
where $\Phi_i$ are matrices and $\varepsilon_t$ is a white noise process. If the process satisfying this equation is stationary, we say that the VAR is stationary. Given the matrices $\Phi_i$ you can check whether the solution is stationary or not. If the roots of the following equation are all greater than 1 in modulus, then the solution is stationary:
$$|I-\lambda\Phi_1-...-\lambda^p\Phi_p|=0,$$
where $|A|$ is the determinant of matrix $A$.
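For the common VAR(1) case this root condition is easy to check numerically: the roots of $|I-\lambda\Phi_1|=0$ are the reciprocals of the eigenvalues of $\Phi_1$, so "roots outside the unit circle" is the same as "eigenvalues inside the unit circle". A small sketch with an illustrative coefficient matrix:

```r
# Stationarity check for a VAR(1)
Phi <- matrix(c(0.5, 0.1,
                0.2, 0.4), nrow = 2, byrow = TRUE)  # example coefficients
Mod(eigen(Phi)$values)            # 0.6 and 0.3 here
all(Mod(eigen(Phi)$values) < 1)   # TRUE -> stationary
```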
Stationarity (in both the weak and strong sense) is a property of a process, whether it is vector-valued or scalar-valued. For example, a process is called stationary in the weak sense if it satisfies two conditions: $EX_t=c$ and $\operatorname{Cov}(X_t,X_s)=r(t-s)$, where $c$ is a constant and $r$ is an appropriate function. The definition is the same for vector and scalar processes. For vector processes it immediately implies that each individual element of the vector is stationary. Now if one of the elements is not stationary, then it is also immediately clear that the whole vector cannot be stationary either. The same reasoning applies for stationarity in the strong sense. So the answer to the question would be no: you cannot get a stationary solution of the VAR equation if one of the elements is not stationary.
It is usual to test non-stationarity, or to be more precise unit-root non-stationarity, for individual variables and then estimate the VAR. Estimation assumes that you have either stationarity or cointegration. If the process is non-stationary and not cointegrated the estimation is not possible (it is possible to argue differently, but it is safe to assume that this holds for all the usual cases).
$$X_t=\alpha+\Phi_1X_{t-1}+...+\Phi_pX_{t-p}+\varepsilon_t,$$
where $\Phi_i$ are matrices and $\ | What's a stationary VAR?
VAR is actualy an equation. We say that process $X_t$ is VAR when it satisfies the following equation:
$$X_t=\alpha+\Phi_1X_{t-1}+...+\Phi_pX_{t-p}+\varepsilon_t,$$
where $\Phi_i$ are matrices and $\varepsilon_t$ is white noise process. If the process satistfying this equation is stationary we say that the VAR is stationary. Given matrices $\Phi_i$ you can check whether the solution is stationary or not. If the roots of the following equation are in modulo greater than 1, then the solution is stationary:
$$|I-\lambda\Phi_1-...-\lambda^p\Phi_p|=0,$$
where $|A|$ is the determinant of matrix $A$.
Stationarity (both in weak and strong sense) is a property of a process. Whether it is a vector valued or scalar valued. For example the process is called stationary in weak sense, if it satisfies two conditions: $EX_t=c$ and $Cov(X_t,X_s)=r(t-s)$, where $c$ is a constant and $r$ is apropriate function. The definition is the same for vector and scalar processes. For vector processes it immediately implies that each individual element of the vector is stationary. Now if one of the elements is not stationary, then it is also immediately clear, that whole vector cannot be stationary too. The same reasoning applies for stationarity in strong sense. So the answer to the question would be no. You cannot get a stationary solution for VAR equation if one of the elements is not stationary.
It is usual to test non-stationarity, or to be more precise unit-root non-stationarity for individual variables and then estimate VAR. Estimation assumes that you have either stationarity or cointegration. If the process is non-stationary and not cointegrated the estimation is not possible (it is possible to argue differently, but it is safe to assume that this holds for all the usual cases). | What's a stationary VAR?
VAR is actualy an equation. We say that process $X_t$ is VAR when it satisfies the following equation:
$$X_t=\alpha+\Phi_1X_{t-1}+...+\Phi_pX_{t-p}+\varepsilon_t,$$
where $\Phi_i$ are matrices and $\ |
30,790 | What's a stationary VAR? | When we use stationary variables in estimating a VAR, the subsequent forecasting using such a VAR model does not provide a clear picture. The graphs that plot actual and forecast values of a stationary variable generally show a straight line for the forecast values and actual values trending around the mean. The idea of whether the variable is increasing or decreasing is not clear at all. How do we solve this issue?
30,791 | What's a stationary VAR? | What is a stationary VAR?
I don't think the question is correct. VAR (Vector Autoregression) is an econometric technique used to model the relationship between time series variables. We cannot say that a VAR is "stationary". You can have "stationary" time series, but not "stationary" VAR models. This is not correct to say! Anyway, a stationary time series variable is a variable which fluctuates around its mean (or its trend) over time. The series may deviate for a little while but it will definitely revert back to the mean or the trend later.
Can a VAR with non-stationary variables be stationary?
Again the question is not formulated correctly. But this is what you want to know. Non-stationary variables can have a stationary relationship. It means that they "move together" over time. We say that they are "cointegrated".
How do you test whether a VAR is stationary or non-stationary? (Example in R language if possible/applicable).
In order to test whether a variable is stationary, you can use a unit root test such as the Dickey-Fuller (DF) test. In R, the tseries package contains the adf.test function.
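A small illustration (the exact test statistics and p-values depend on the simulated data, so take the comments as typical outcomes, not guaranteed outputs):

```r
library(tseries)  # provides adf.test
set.seed(123)
stationary_x <- arima.sim(list(ar = 0.5), n = 200)  # stationary AR(1)
random_walk  <- cumsum(rnorm(200))                  # unit-root process
adf.test(stationary_x)  # typically a small p-value: reject the unit root
adf.test(random_walk)   # typically a large p-value: cannot reject
```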
You may want to read this. It may be a little hard to digest, but there are a few easy-to-understand illustrations that will help you understand.
30,792 | Permutation test in R | What you are saying is you would like to compare the paired t-test statistic to the distribution of such statistics obtained by independently switching all possible pairs of data. There are $2^{10}=1024$ such switches, small enough to enable fast computation of the full distribution.
It is convenient in R to code this as a t-test for the difference between the two sets of data: instead of switching values, we merely need to negate them.
Let's first run the t-test:
x <- c(12.9, 13.5, 12.8, 15.6, 17.2, 19.2, 12.6, 15.3, 14.4, 11.3)
y <- c(12.7, 13.6, 12.0, 15.2, 16.8, 20.0, 12.0, 15.9, 16.0, 11.1)
(value <- t.test(x,y, paired=TRUE, alternative="two.sided"))
The statistic and p-value are $-0.213$ and $0.836$, as expected. Now let's generate the permutation distribution (using expand.grid as requested):
perms <- do.call(expand.grid, lapply(as.list(1:length(x)), function(i) c(-1,1)))
dist <- apply(perms, 1, function(p) t.test(p*(x-y), alt="t")$statistic)
(This takes $0.33$ seconds.) As a quick check, let's graph the results:
hist(dist)
abline(v = value$statistic, col="Red", lwd=2)
Because the actual statistic is near the middle of the distribution and this is a two-sided test, the p-value looks to be approximately $0.9$. We can compute it:
sum(abs(dist) > abs(value$statistic)) / 2^length(x)
The result is $0.836$, the same as the t-distribution gave us.
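For readers working outside R, the same exhaustive sign-flipping scheme can be sketched in Python with numpy. This is a re-implementation for illustration, not part of the original answer; it mirrors the strict inequality used in the R code above.

```python
import itertools
import numpy as np

x = np.array([12.9, 13.5, 12.8, 15.6, 17.2, 19.2, 12.6, 15.3, 14.4, 11.3])
y = np.array([12.7, 13.6, 12.0, 15.2, 16.8, 20.0, 12.0, 15.9, 16.0, 11.1])
d = x - y

def t_stat(v):
    # one-sample t statistic of v against a zero mean
    return v.mean() / (v.std(ddof=1) / np.sqrt(len(v)))

observed = t_stat(d)
# enumerate all 2^10 sign assignments for the paired differences
dist = np.array([t_stat(np.array(signs) * d)
                 for signs in itertools.product([-1.0, 1.0], repeat=len(d))])
# strict inequality, mirroring the R code above
p_value = np.mean(np.abs(dist) > np.abs(observed))
print(observed, p_value)
```

The observed statistic matches the $-0.213$ above, and the permutation p-value comes out at about $0.84$, in line with the t-distribution result.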
30,793 | Permutation test in R

You should be using a paired t-test since these are paired data, not a two-sample test.
All possible permutations of pre-post data would be obtained using
expand.grid(pre=x, post=y)
And we know it's only a 100 by 2 matrix, which is far from impossible. I don't know why you're replicating x, or what k is.
30,794 | Computation of hypergeometric function in R

Unless you need to evaluate the Gauss hypergeometric function for complex values of the parameters or the variable, it is better to use Robin Hankin's gsl package.
Based on my experience I also recommend to only evaluate the Gauss hypergeometric function for a value of the variable lying in $[0,1]$, and use a transformation formula for values in $]-\infty, 0]$.
library(gsl)
Gauss2F1 <- function(a,b,c,x){
if(x>=0 & x<1){
hyperg_2F1(a,b,c,x)
}else{
hyperg_2F1(c-a,b,c,1-1/(1-x))/(1-x)^b
}
}
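The transformation applied for negative arguments is the Pfaff transformation, $_2F_1(a,b;c;x)=(1-x)^{-b}\,{}_2F_1(c-a,b;c;x/(x-1))$, and it can be checked numerically with a plain power series. Here is a small Python sketch (not part of the original answer); for $-1 < x < 0$ both the direct series and the transformed route are available, so they should agree.

```python
def hyp2f1_series(a, b, c, x, terms=200):
    # plain power-series evaluation of 2F1(a,b;c;x); fine for |x| < 1
    total, term = 1.0, 1.0
    for n in range(terms):
        term *= (a + n) * (b + n) / ((c + n) * (n + 1)) * x
        total += term
    return total

def gauss2f1(a, b, c, x):
    # mirror of the R function: series directly on [0, 1),
    # Pfaff transformation for negative x
    if 0 <= x < 1:
        return hyp2f1_series(a, b, c, x)
    return hyp2f1_series(c - a, b, c, 1 - 1 / (1 - x)) / (1 - x) ** b

a, b, c, x = 1.5, 2.25, 3.75, -0.5
direct = hyp2f1_series(a, b, c, x)
transformed = gauss2f1(a, b, c, x)
print(direct, transformed)
```

The two routes agree to high precision, which is the identity the R code relies on; for production work the compiled `hyperg_2F1` from gsl remains the better choice.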
30,795 | Computation of hypergeometric function in R

@Stéphane Laurent's formula above is great. I've noticed that it sometimes produces NaNs when a, b, c are large and z is negative – I haven't been able to pin down the precise conditions. In these cases we can use another hypergeometric transformation starting from Stéphane's alternative expression. It leads to this alternative formula:
library(gsl)
Gauss2F1b <- function(a,b,c,x){
if(x>=0 & x<1){
hyperg_2F1(a,b,c,x)
}else{
hyperg_2F1(a,c-b,c,1-1/(1-x))/(1-x)^a
}
}
For example:
> Gauss2F1(80.2,50.1,61.3,-1)
[1] NaN
>
> Gauss2F1b(80.2,50.1,61.3,-1)
[1] 5.498597e-20
>
>
> Gauss2F1(80.2,50.1,61.3,-3)
[1] NaN
> Gauss2F1b(80.2,50.1,61.3,-3)
[1] 5.343807e-38
>
>
> Gauss2F1(80.2,50.1,61.3,-0.4)
[1] NaN
> Gauss2F1b(80.2,50.1,61.3,-0.4)
[1] 3.322785e-10
all three agreeing with Mathematica's Hypergeometric2F1. This formula seems well behaved also for smaller a, b, c. Note that there are cases in which this formula gives NaN and Stéphane's doesn't, though. Best to check case by case.
30,796 | Standard algorithms for doing hierarchical linear regression?

There's Harvey Goldstein's iterative generalized least-squares (IGLS) algorithm for one, and also its minor modification, restricted iterative generalized least-squares (RIGLS), which gives unbiased estimates of the variance parameters.
These algorithms are still iterative, so not closed form, but they're computationally simpler than MCMC or maximum likelihood. You just iterate until the parameters converge.
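To make the "iterate until convergence" idea concrete, here is a minimal sketch for a one-way random-intercept model: alternate a GLS step for the fixed effects with moment-style updates of the two variance components. This is only illustrative of the IGLS flavour, not Goldstein's actual algorithm; all names and the simulation are made up.

```python
import numpy as np

rng = np.random.default_rng(42)
groups, per_group = 30, 10
n = groups * per_group
g = np.repeat(np.arange(groups), per_group)
Z = np.eye(groups)[g]                       # group indicator matrix (n x groups)
X = np.column_stack([np.ones(n), rng.standard_normal(n)])
y = X @ np.array([1.0, 2.0]) \
    + Z @ rng.standard_normal(groups) \
    + 0.5 * rng.standard_normal(n)          # sigma_u^2 = 1, sigma_e^2 = 0.25

s2_u, s2_e = 1.0, 1.0                       # starting values
for _ in range(50):
    V = s2_u * Z @ Z.T + s2_e * np.eye(n)
    Vi = np.linalg.inv(V)
    # GLS step for the fixed effects, given the current V
    beta = np.linalg.solve(X.T @ Vi @ X, X.T @ Vi @ y)
    r = y - X @ beta
    # moment-style updates of the two variance components
    gmeans = r @ Z / per_group
    s2_e_new = np.sum((r - Z @ gmeans) ** 2) / (n - groups)
    s2_u_new = max(np.var(gmeans) - s2_e_new / per_group, 1e-8)
    done = abs(s2_e_new - s2_e) + abs(s2_u_new - s2_u) < 1e-10
    s2_u, s2_e = s2_u_new, s2_e_new
    if done:
        break

print(beta, s2_u, s2_e)
```

With the simulated data the fixed effects land near their true values (1, 2) and the variance components near 1 and 0.25; the real IGLS/RIGLS algorithms do the variance step by a proper GLS regression on residual cross-products rather than these crude moment formulas.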
Goldstein H. Multilevel Mixed Linear-Model Analysis Using Iterative Generalized Least-Squares. Biometrika 1986; 73(1):43-56. doi: 10.1093/biomet/73.1.43
Goldstein H. Restricted Unbiased Iterative Generalized Least-Squares Estimation. Biometrika 1989; 76(3):622-623. doi: 10.1093/biomet/76.3.622
For more info on this and alternatives, see e.g.:
Stephen W. Raudenbush, Anthony S. Bryk. Hierarchical linear models:
applications and data analysis methods. (2nd edition) Sage, 2002.
30,797 | Standard algorithms for doing hierarchical linear regression?

The lme4 package in R uses iteratively reweighted least squares (IRLS) and penalized iteratively reweighted least squares (PIRLS). See the PDFs here:
http://rss.acs.unt.edu/Rdoc/library/lme4/doc/index.html
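As background, PIRLS is the penalized cousin of plain IRLS. For a simple (non-mixed) logistic GLM the IRLS loop looks like the following sketch; this is illustrative only, not lme4's code, and the data are simulated.

```python
import numpy as np

def irls_logistic(X, y, iters=25):
    # each IRLS step solves a weighted least-squares problem
    # on the "working response" z
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        eta = X @ beta
        mu = 1.0 / (1.0 + np.exp(-eta))   # logistic mean function
        w = mu * (1.0 - mu)               # GLM working weights
        z = eta + (y - mu) / w            # working response
        XtW = X.T * w
        beta = np.linalg.solve(XtW @ X, XtW @ z)
    return beta

rng = np.random.default_rng(1)
X = np.column_stack([np.ones(400), rng.standard_normal(400)])
true_beta = np.array([-0.5, 1.5])
y = (rng.random(400) < 1.0 / (1.0 + np.exp(-X @ true_beta))).astype(float)
b = irls_logistic(X, y)
print(b)
```

In the mixed-model case, lme4's PIRLS adds a quadratic penalty on the spherical random effects to the same weighted least-squares step.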
30,798 | Standard algorithms for doing hierarchical linear regression?

Another good source for "computing algorithms" for HLMs (again, to the extent that you view them as similar specifications to LMMs) would be:
McCulloch, C., Searle, S., Neuhaus, J. (2008). Generalized Linear and Mixed Models. 2nd Edition. Wiley. Chapter 14 - Computing.
Algorithms they list for computing LMM's include:
EM algorithm
Newton Raphson algorithm
Algorithms they list for GLMM's include:
Numerical quadrature (GH quadrature)
EM algorithm
MCMC algorithms (as you mention)
Stochastic approximation algorithms
Simulated maximum likelihood
Other algorithms for GLMM's that they suggest include:
Penalized quasi-likelihood methods
Laplace approximations
PQL/Laplace with bootstrap bias correction
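Of the algorithms listed for GLMMs, Gauss-Hermite quadrature is the easiest to demonstrate in isolation: it approximates the integral over the normal random effect that appears in the marginal likelihood. A small generic sketch (not from the book; the function names are made up):

```python
import numpy as np
from numpy.polynomial.hermite import hermgauss

def gh_expectation(f, sigma, n_points=30):
    # E[f(U)] for U ~ N(0, sigma^2): substitute u = sqrt(2)*sigma*x so the
    # Gauss-Hermite weight exp(-x^2) matches the normal density
    x, w = hermgauss(n_points)
    return np.sum(w * f(np.sqrt(2.0) * sigma * x)) / np.sqrt(np.pi)

def expit(t):
    return 1.0 / (1.0 + np.exp(-t))

# marginal success probability in a random-intercept logit model:
# integrate the conditional probability over the random effect
p_marg = gh_expectation(lambda u: expit(0.5 + u), sigma=1.2)

# sanity check: dense Riemann sum over a wide grid
grid = np.linspace(-10.0, 10.0, 200001)
dens = np.exp(-grid**2 / (2 * 1.2**2)) / (1.2 * np.sqrt(2 * np.pi))
p_ref = np.sum(expit(0.5 + grid) * dens) * (grid[1] - grid[0])
print(p_marg, p_ref)
```

A handful of quadrature points reproduces the brute-force integral to many digits; in a GLMM fit this expectation is evaluated once per cluster inside the likelihood.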
McCulloch, C., Searle, S., Neuhaus, J. (2008). Generaliz | Standard algorithms for doing hierarchical linear regression?
Another good source for "computing algorithms" for HLM's (again to the extent that you view them as similar specifications as LMM's) would be:
McCulloch, C., Searle, S., Neuhaus, J. (2008). Generalized Linear and Mixed Models. 2nd Edition. Wiley. Chapter 14 - Computing.
Algorithms they list for computing LMM's include:
EM algorithm
Newton Raphson algorithm
Algorithms they list for GLMM's include:
Numerical quadrature (GH quadrature)
EM algorithm
MCMC algorithms (as you mention)
Stochastic approximation algorithms
Simulated maximum likelihood
Other algorithms for GLMM's that they suggest include:
Penalized quasi-likelihood methods
Laplace approximations
PQL/Laplace with bootstrap bias correction | Standard algorithms for doing hierarchical linear regression?
Another good source for "computing algorithms" for HLM's (again to the extent that you view them as similar specifications as LMM's) would be:
McCulloch, C., Searle, S., Neuhaus, J. (2008). Generaliz |
30,799 | Standard algorithms for doing hierarchical linear regression?

If you consider the HLM to be a type of linear mixed model, you could consider the EM algorithm. Pages 22-23 of the following course notes show how to implement the classic EM algorithm for mixed models:
http://www.stat.ucla.edu/~yuille/courses/stat153/emtutorial.pdf
###########################################################
# Classical EM algorithm for Linear Mixed Model #
###########################################################
# tr() and ginverse() are not base R: define the trace and use
# MASS::ginv for the generalized inverse
library(MASS)
tr <- function(m) sum(diag(m))
ginverse <- function(m) ginv(m)

em.mixed <- function(y, x, z, beta, var0, var1, maxiter=2000, tolerance=1e-10)
{
time <-proc.time()
n <- nrow(y)
q1 <- ncol(z)   # number of random effects (z is n x q)
conv <- 1
L0 <- loglike(y, x, z, beta, var0, var1)
i<-0
cat(" Iter. sigma0 sigma1 Likelihood",fill=T)
repeat {
if(i>maxiter) {conv<-0
break}
V <- c(var1) * z %*% t(z) + c(var0) * diag(n)
Vinv <- solve(V)
xb <- x %*% beta
resid <- (y-xb)
temp1 <- Vinv %*% resid
s0 <- c(var0)^2 * t(temp1)%*%temp1 + c(var0) * n - c(var0)^2 * tr(Vinv)
s1 <- c(var1)^2 * t(temp1)%*%z%*%t(z)%*%temp1+ c(var1)*q1 -
c(var1)^2 *tr(t(z)%*%Vinv%*%z)
w <- xb + c(var0) * temp1
var0 <- s0/n
var1 <- s1/q1
beta <- ginverse( t(x) %*% x) %*% t(x)%*% w
L1 <- loglike(y, x, z, beta, var0, var1)
if(L1 < L0) { print("log-likelihood must increase (L1 < L0), breaking.")
conv <- 0
break
}
i <- i + 1
cat(" ", i," ",var0," ",var1," ",L1,fill=T)
if(abs(L1 - L0) < tolerance) {break} #check for convergence
L0 <- L1
}
list(beta=beta, var0=var0,var1=var1,Loglikelihood=L0)
}
#########################################################
# loglike calculates the LogLikelihood for Mixed Model #
#########################################################
loglike <- function(y, x, z, beta, var0, var1)
{
n<- nrow(y)
V <- c(var1) * z %*% t(z) + c(var0) * diag(n)
Vinv <- ginverse(V)
xb <- x %*% beta
resid <- (y-xb)
temp1 <- Vinv %*% resid
(-.5)*( log(det(V)) + t(resid) %*% temp1 )
}
30,800 | Why aren't all tests scored via item analysis/response theory?

(You asked whether there is a statistical reason: I doubt it, but I'm guessing about other reasons.) Would there be cries of "moving the goalpost"? Students usually like to know when taking a test just how much each item is worth. They might be justified in complaining upon seeing, for example, that some of their hard-worked answers didn't end up counting much.
Many teachers and professors use unsystematic, subjective criteria for scoring tests. But those who do use systems are probably wary about opening those systems up to specific criticism -- something they can largely avoid if hiding behind more subjective approaches. That might explain why item analysis and IRT are not used more widely than they are.