idx
int64 1
56k
| question
stringlengths 15
155
| answer
stringlengths 2
29.2k
⌀ | question_cut
stringlengths 15
100
| answer_cut
stringlengths 2
200
⌀ | conversation
stringlengths 47
29.3k
| conversation_cut
stringlengths 47
301
|
|---|---|---|---|---|---|---|
46,601
|
Are all neural network activation functions differentiable?
|
No! For example, ReLU, which is a widely used activation function, is not differentiable in $z=0$. But they are usually non-differentiable at only a small number of points and they have right derivative and left derivatives at these points. We usually use one of the one-side derivatives. This is rational since digital computers are subject to numerical errors ($z=0$ has been probably some small value rounded to zero). Read chapter 6 of the following book for more details on activation functions:
Ian Goodfellow, Yoshua Bengio, and Aaron Courville, Deep Learning, MIT Press, 2016, http://deeplearningbook.org
|
Are all neural network activation functions differentiable?
|
No! For example, ReLU, which is a widely used activation function, is not differentiable in $z=0$. But they are usually non-differentiable at only a small number of points and they have right derivati
|
Are all neural network activation functions differentiable?
No! For example, ReLU, which is a widely used activation function, is not differentiable in $z=0$. But they are usually non-differentiable at only a small number of points and they have right derivative and left derivatives at these points. We usually use one of the one-side derivatives. This is rational since digital computers are subject to numerical errors ($z=0$ has been probably some small value rounded to zero). Read chapter 6 of the following book for more details on activation functions:
Ian Goodfellow, Yoshua Bengio, and Aaron Courville, Deep Learning, MIT Press, 2016, http://deeplearningbook.org
|
Are all neural network activation functions differentiable?
No! For example, ReLU, which is a widely used activation function, is not differentiable in $z=0$. But they are usually non-differentiable at only a small number of points and they have right derivati
|
46,602
|
Are all neural network activation functions differentiable?
|
If you're going to use gradient descent to learn parameters, you need not only the activation functions to be differential almost everywhere, but ideally the gradient should be non-zero for large parts of the domain. It is not a strict requirement that the gradient be non-0 almost everywhere. For example ReLU has gradient zero for $x \le 0$, and it works pretty well. But whilst the input is in an area of zero gradient, no learning will take place. This manifests in practice in a few ways:
ReLU neurons can get effectively permanently removed from the network, if none of the inputs in the training set ever result in a non-zero gradient. Dropout can sometimes help with this, but not always
sigmoid activation has gradient close to zero for high and low input values. This is a key incentive for normalizing the mean and variance of data, prior to feeding to the network
it's one of the driving factors for the use of leaky-ReLU activation, and ELU activation. Both of these have non-zero gradient almost everywhere
("almost everywhere" means with the exception of a finite, or countably infinite, set of points. eg, as Hossein points out, ReLU is not differentiable at $x = 0$)
|
Are all neural network activation functions differentiable?
|
If you're going to use gradient descent to learn parameters, you need not only the activation functions to be differential almost everywhere, but ideally the gradient should be non-zero for large part
|
Are all neural network activation functions differentiable?
If you're going to use gradient descent to learn parameters, you need not only the activation functions to be differential almost everywhere, but ideally the gradient should be non-zero for large parts of the domain. It is not a strict requirement that the gradient be non-0 almost everywhere. For example ReLU has gradient zero for $x \le 0$, and it works pretty well. But whilst the input is in an area of zero gradient, no learning will take place. This manifests in practice in a few ways:
ReLU neurons can get effectively permanently removed from the network, if none of the inputs in the training set ever result in a non-zero gradient. Dropout can sometimes help with this, but not always
sigmoid activation has gradient close to zero for high and low input values. This is a key incentive for normalizing the mean and variance of data, prior to feeding to the network
it's one of the driving factors for the use of leaky-ReLU activation, and ELU activation. Both of these have non-zero gradient almost everywhere
("almost everywhere" means with the exception of a finite, or countably infinite, set of points. eg, as Hossein points out, ReLU is not differentiable at $x = 0$)
|
Are all neural network activation functions differentiable?
If you're going to use gradient descent to learn parameters, you need not only the activation functions to be differential almost everywhere, but ideally the gradient should be non-zero for large part
|
46,603
|
What's stopping a gradient from making a probability negative?
|
There are a couple issues with the expressions in this question. I'll address these, then answer the question down below.
Issues with expressions
In your expression, $p(x \mid \theta)$ will always be 0 if $x=1$. It should be $\theta^{x_d} (1 - \theta)^{1-x_d}$ rather than $\theta^{x_d} (1 - \theta^{1-x_d})$. No need to subscript the thetas; the data are i.i.d. so $\theta$ is a single, scalar value that's shared for all data points. If performing gradient descent you'd work with the negative log likelihood (otherwise you'd be minimizing the likelihood rather than maximizing it). Use the log likelihood if using gradient ascent.
The negative log likelihood should be:
$$L(\theta) = -\log \prod_{d=1}^{n} \theta^{x_d} (1 - \theta)^{1-x_d}$$
The log of a product is a sum of logs so:
$$L(\theta) = -\sum_{d=1}^{n} \log \left (
\theta^{x_d} (1 - \theta)^{1-x_d}
\right )$$
Differentiating w.r.t. $\theta$, we have:
$$\frac{d}{d\theta} L(\theta)
= -\sum_{d=1}^{n} \left (
\frac{x_d}{\theta}
+ \frac{x_d-1}{1 - \theta}
\right )$$
Can gradient descent yield invalid parameters?
Can gradient descent ever set $\theta$ to be less than 0 or greater than 1? One thing that will tend to prevent this is that, when the data set contains a mix of zeros and ones, the negative log likelihood approaches infinity as $\theta$ approaches 0 or 1. This discourages gradient descent from approaching or exceeding these values; the gradient will pull $\theta$ back into a more reasonable range.
For example, here's the negative log likelihood and gradient for some points sampled i.i.d. from a Bernoulli distribution with true $\theta=0.7$:
But, say we set the step size too large. For example, say the current $\theta$ is 0.5 (so gradient descent will step in the positive direction), and the step size is some large value. We can overshoot the optimum ($\theta=0.7$), and even exceed the valid parameter range; $\theta$ could end up greater than 1. If this happens, the expression for the negative log likelihood will return a complex value because we'd be taking the log of a negative number. This breaks the optimization.
The case is different when the data set contains all zeros or all ones. For example, here's the negative log likelihood and gradient for 100 values that are all one:
The true value of $\theta$ is 1, which has a negative log likelihood of 0. But, looking at the expressions above, the gradient is -100. This means gradient descent will keep stepping in the positive direction. And, in this case, the expression for the negative log likelihood will produce increasingly negative values. So, gradient descent will continue to increase $\theta$ without bound.
Fixing the problem
The issue is that gradient descent is an unconstrained optimization algorithm, but this problem requires that $\theta \in [0, 1]$. Solving the problem correctly requires imposing this constraint. Of course, this particular problem can be solved by simply calculating the frequency of ones in the data (no need for iterative optimization). But, for the sake of illustration, there are a couple approaches that can work. One way is to reparameterize the problem. For example, we could define $\theta$ as a sigmoid function of some real-valued parameter $\alpha$. That is, $\theta = \frac{1}{1 + e^{-\alpha}}$. So, $\alpha$ can take any value and $\theta$ will always always lie between 0 and 1. We can then use an unconstrained optimization algorithm to minimize the negative log likelihood w.r.t. $\alpha$. Another approach is to use an optimization algorithm that lets us impose bound constraints, which will force $\theta$ to lie in the correct range. There are dedicated solvers that can implement many kinds of constraints, and the best choice will depend on the particular problem. A simple example in this case would be to modify gradient descent to clip $\theta$ to the allowed range after each step: $\theta \leftarrow \min(1, \max(0, \theta))$. This is an example of a 'gradient projection method'. I mention it because this question is focused on gradient descent, but there are other constrained optimization algorithms with faster convergence.
|
What's stopping a gradient from making a probability negative?
|
There are a couple issues with the expressions in this question. I'll address these, then answer the question down below.
Issues with expressions
In your expression, $p(x \mid \theta)$ will always be
|
What's stopping a gradient from making a probability negative?
There are a couple issues with the expressions in this question. I'll address these, then answer the question down below.
Issues with expressions
In your expression, $p(x \mid \theta)$ will always be 0 if $x=1$. It should be $\theta^{x_d} (1 - \theta)^{1-x_d}$ rather than $\theta^{x_d} (1 - \theta^{1-x_d})$. No need to subscript the thetas; the data are i.i.d. so $\theta$ is a single, scalar value that's shared for all data points. If performing gradient descent you'd work with the negative log likelihood (otherwise you'd be minimizing the likelihood rather than maximizing it). Use the log likelihood if using gradient ascent.
The negative log likelihood should be:
$$L(\theta) = -\log \prod_{d=1}^{n} \theta^{x_d} (1 - \theta)^{1-x_d}$$
The log of a product is a sum of logs so:
$$L(\theta) = -\sum_{d=1}^{n} \log \left (
\theta^{x_d} (1 - \theta)^{1-x_d}
\right )$$
Differentiating w.r.t. $\theta$, we have:
$$\frac{d}{d\theta} L(\theta)
= -\sum_{d=1}^{n} \left (
\frac{x_d}{\theta}
+ \frac{x_d-1}{1 - \theta}
\right )$$
Can gradient descent yield invalid parameters?
Can gradient descent ever set $\theta$ to be less than 0 or greater than 1? One thing that will tend to prevent this is that, when the data set contains a mix of zeros and ones, the negative log likelihood approaches infinity as $\theta$ approaches 0 or 1. This discourages gradient descent from approaching or exceeding these values; the gradient will pull $\theta$ back into a more reasonable range.
For example, here's the negative log likelihood and gradient for some points sampled i.i.d. from a Bernoulli distribution with true $\theta=0.7$:
But, say we set the step size too large. For example, say the current $\theta$ is 0.5 (so gradient descent will step in the positive direction), and the step size is some large value. We can overshoot the optimum ($\theta=0.7$), and even exceed the valid parameter range; $\theta$ could end up greater than 1. If this happens, the expression for the negative log likelihood will return a complex value because we'd be taking the log of a negative number. This breaks the optimization.
The case is different when the data set contains all zeros or all ones. For example, here's the negative log likelihood and gradient for 100 values that are all one:
The true value of $\theta$ is 1, which has a negative log likelihood of 0. But, looking at the expressions above, the gradient is -100. This means gradient descent will keep stepping in the positive direction. And, in this case, the expression for the negative log likelihood will produce increasingly negative values. So, gradient descent will continue to increase $\theta$ without bound.
Fixing the problem
The issue is that gradient descent is an unconstrained optimization algorithm, but this problem requires that $\theta \in [0, 1]$. Solving the problem correctly requires imposing this constraint. Of course, this particular problem can be solved by simply calculating the frequency of ones in the data (no need for iterative optimization). But, for the sake of illustration, there are a couple approaches that can work. One way is to reparameterize the problem. For example, we could define $\theta$ as a sigmoid function of some real-valued parameter $\alpha$. That is, $\theta = \frac{1}{1 + e^{-\alpha}}$. So, $\alpha$ can take any value and $\theta$ will always always lie between 0 and 1. We can then use an unconstrained optimization algorithm to minimize the negative log likelihood w.r.t. $\alpha$. Another approach is to use an optimization algorithm that lets us impose bound constraints, which will force $\theta$ to lie in the correct range. There are dedicated solvers that can implement many kinds of constraints, and the best choice will depend on the particular problem. A simple example in this case would be to modify gradient descent to clip $\theta$ to the allowed range after each step: $\theta \leftarrow \min(1, \max(0, \theta))$. This is an example of a 'gradient projection method'. I mention it because this question is focused on gradient descent, but there are other constrained optimization algorithms with faster convergence.
|
What's stopping a gradient from making a probability negative?
There are a couple issues with the expressions in this question. I'll address these, then answer the question down below.
Issues with expressions
In your expression, $p(x \mid \theta)$ will always be
|
46,604
|
Approaches for comparing visual representation of two distributions with unequal sample sizes
|
If you really need to compare histograms at different sample sizes, scale them both to area 1 (i.e. to be density estimates).
However, as Nick suggested in comments, there are other ways of comparing the distributions that don't require binning.
You could plot ecdfs, or a pair of theoretical QQ plots on the same axes (the theoretical distribution doesn't need to be perfect, though a reasonable approximation will help with detailed comparisons), or perhaps kernel density estimates, for example.
|
Approaches for comparing visual representation of two distributions with unequal sample sizes
|
If you really need to compare histograms at different sample sizes, scale them both to area 1 (i.e. to be density estimates).
However, as Nick suggested in comments, there are other ways of comparing
|
Approaches for comparing visual representation of two distributions with unequal sample sizes
If you really need to compare histograms at different sample sizes, scale them both to area 1 (i.e. to be density estimates).
However, as Nick suggested in comments, there are other ways of comparing the distributions that don't require binning.
You could plot ecdfs, or a pair of theoretical QQ plots on the same axes (the theoretical distribution doesn't need to be perfect, though a reasonable approximation will help with detailed comparisons), or perhaps kernel density estimates, for example.
|
Approaches for comparing visual representation of two distributions with unequal sample sizes
If you really need to compare histograms at different sample sizes, scale them both to area 1 (i.e. to be density estimates).
However, as Nick suggested in comments, there are other ways of comparing
|
46,605
|
Approaches for comparing visual representation of two distributions with unequal sample sizes
|
Because you asked about histograms, I'm assuming you are interested in comparing the shapes of two distributions to see how similar they are. This is distinct from trying to visualize other aspects of the distributions, such as whether their means differ.
In general, histograms are a blunt tool for assessing the shape of a distribution (see this excellent answer: Assessing approximate distribution of data based on a histogram). Although you would have the same problem with both distributions when trying to compare them, they won't necessarily cancel each other out. Thus, histograms are not really a good choice for this task.
Your best bet is to use a qq-plot. Nowadays, qq-plots are thought of as a means to compare an observed distribution to a theoretical one. However, they were originally developed to compare distributions; the same situation you have here. Interpolation methods are standardly used to match datasets of different sizes, so that is largely a non-issue. If the shapes of the distributions are the same, the points will fall on a straight line.
To explore this suggestion, let's examine some data. Here I simulate data from an exponential distribution and a chi-squared distribution. Both are strongly right-skewed, but their shapes differ. I will code the example using R:
set.seed(7264) # this makes the example exactly reproducible
x1 = rexp(20, rate=1)
x2 = rchisq(100, df=2)
windows()
qqplot(x1, x2)
abline(0,1, col="gray")
The most basic finding is that the points do not fall on a straight line. That means that the shapes differ. A follow-up question is assessing how the distributions differ. To some degree, that can be determined by examining aspects of the plot. I added a 45$^\circ$ line through the origin. To be clear, the points needn't fall on this line—they could fall on any straight line, but this particular reference line can help us understand how they differ: Because the points start together at (0, 0) but are largely above the line, we can tell that the variance of X2 is larger than that of X1. We can see that the last point is about (2.7, 10), so the maximum X2 value is much further out. Because the middle value is above the line, we can tell the median of X2 is larger than the median of X1. Furthermore, the concave-up curve implies that X2 is more skewed than X1.
Nonetheless, it is typically difficult for people to determine shapes, or relative shapes, from qq-plots. I typically recommend people pair them with kernel density plots. You can assess if the shapes are similar with the qq-plot, and then figure out what the shapes seem to be from the density plot.
windows()
plot(density(x2), col="navyblue", ylim=c(0, max(density(x1)$y)),
xlim=c(0, max(density(x2)$x)), lwd=2, xlab="value", main="")
lines(density(x1), col="green4", lwd=2)
xs = seq(0,10,.1)
lines(x=xs, y=dexp(xs, rate=1), lty=2, lwd=2, col="chartreuse")
lines(x=xs, y=dchisq(xs, df=2), lty=2, lwd=2, col="cyan")
legend("topright", legend=c("exp kernel density",
"chi-squared kernel density",
"exp theoretical density",
"chi-squared theoretical density"),
lty=c(1,1,2,2), lwd=2, col=c("navyblue","green4","chartreuse","cyan"))
I think the shapes are easier to see here. (I also included code for the true PDFs, if you want to see what they look like.)
|
Approaches for comparing visual representation of two distributions with unequal sample sizes
|
Because you asked about histograms, I'm assuming you are interested in comparing the shapes of two distributions to see how similar they are. This is distinct from trying to visualize other aspects o
|
Approaches for comparing visual representation of two distributions with unequal sample sizes
Because you asked about histograms, I'm assuming you are interested in comparing the shapes of two distributions to see how similar they are. This is distinct from trying to visualize other aspects of the distributions, such as whether their means differ.
In general, histograms are a blunt tool for assessing the shape of a distribution (see this excellent answer: Assessing approximate distribution of data based on a histogram). Although you would have the same problem with both distributions when trying to compare them, they won't necessarily cancel each other out. Thus, histograms are not really a good choice for this task.
Your best bet is to use a qq-plot. Nowadays, qq-plots are thought of as a means to compare an observed distribution to a theoretical one. However, they were originally developed to compare distributions; the same situation you have here. Interpolation methods are standardly used to match datasets of different sizes, so that is largely a non-issue. If the shapes of the distributions are the same, the points will fall on a straight line.
To explore this suggestion, let's examine some data. Here I simulate data from an exponential distribution and a chi-squared distribution. Both are strongly right-skewed, but their shapes differ. I will code the example using R:
set.seed(7264) # this makes the example exactly reproducible
x1 = rexp(20, rate=1)
x2 = rchisq(100, df=2)
windows()
qqplot(x1, x2)
abline(0,1, col="gray")
The most basic finding is that the points do not fall on a straight line. That means that the shapes differ. A follow-up question is assessing how the distributions differ. To some degree, that can be determined by examining aspects of the plot. I added a 45$^\circ$ line through the origin. To be clear, the points needn't fall on this line—they could fall on any straight line, but this particular reference line can help us understand how they differ: Because the points start together at (0, 0) but are largely above the line, we can tell that the variance of X2 is larger than that of X1. We can see that the last point is about (2.7, 10), so the maximum X2 value is much further out. Because the middle value is above the line, we can tell the median of X2 is larger than the median of X1. Furthermore, the concave-up curve implies that X2 is more skewed than X1.
Nonetheless, it is typically difficult for people to determine shapes, or relative shapes, from qq-plots. I typically recommend people pair them with kernel density plots. You can assess if the shapes are similar with the qq-plot, and then figure out what the shapes seem to be from the density plot.
windows()
plot(density(x2), col="navyblue", ylim=c(0, max(density(x1)$y)),
xlim=c(0, max(density(x2)$x)), lwd=2, xlab="value", main="")
lines(density(x1), col="green4", lwd=2)
xs = seq(0,10,.1)
lines(x=xs, y=dexp(xs, rate=1), lty=2, lwd=2, col="chartreuse")
lines(x=xs, y=dchisq(xs, df=2), lty=2, lwd=2, col="cyan")
legend("topright", legend=c("exp kernel density",
"chi-squared kernel density",
"exp theoretical density",
"chi-squared theoretical density"),
lty=c(1,1,2,2), lwd=2, col=c("navyblue","green4","chartreuse","cyan"))
I think the shapes are easier to see here. (I also included code for the true PDFs, if you want to see what they look like.)
|
Approaches for comparing visual representation of two distributions with unequal sample sizes
Because you asked about histograms, I'm assuming you are interested in comparing the shapes of two distributions to see how similar they are. This is distinct from trying to visualize other aspects o
|
46,606
|
Imputation methods for time series data
|
Your approach sounds very theoretical.
Did you analyze the imputations of the packages you mentioned?
Often imputation packages have requirements (e.g. MCAR data), but will still do a reasonable good job on data not fulfilling these conditions.
Only a actual test and comparison of algorithms will show you which one is best suited for your data.
The testing procedure can look like this:
Find a interval with no (or very few) missing data
Artificially add missing data in this interval. (these should resemble the NA patterns in the rest of the data)
Apply different imputation methods to this dataset. (e.g. methods from imputeTS, mtsdi, AMELIA)
Since you have the real values for your artificially deleted NA values, you can now compare how good alle the algorithms did on your data
Additional info:
The Amelia package also has some options to support the imputation of multivariate time series (see in the manual under 4.6)
Also other packages like mice could be tried
In general if you have multivariate time series, this means you have correlations between your different variables plus you have correlations of each variable in the time axis.
(here is a talk from useR! 2017 conference which among other things explains this)
In theory it sounds like it would make most sense if you try to use both of the correlations. But if the correlations in time is for example very strong, univariate time series imputation methods from imputeTS might even work best.
On the other hand, if the correlation between your variables is very strong, non time series imputation packages could work best. (like mice, VIM, missMDA and others)
|
Imputation methods for time series data
|
Your approach sounds very theoretical.
Did you analyze the imputations of the packages you mentioned?
Often imputation packages have requirements (e.g. MCAR data), but will still do a reasonable goo
|
Imputation methods for time series data
Your approach sounds very theoretical.
Did you analyze the imputations of the packages you mentioned?
Often imputation packages have requirements (e.g. MCAR data), but will still do a reasonable good job on data not fulfilling these conditions.
Only a actual test and comparison of algorithms will show you which one is best suited for your data.
The testing procedure can look like this:
Find a interval with no (or very few) missing data
Artificially add missing data in this interval. (these should resemble the NA patterns in the rest of the data)
Apply different imputation methods to this dataset. (e.g. methods from imputeTS, mtsdi, AMELIA)
Since you have the real values for your artificially deleted NA values, you can now compare how good alle the algorithms did on your data
Additional info:
The Amelia package also has some options to support the imputation of multivariate time series (see in the manual under 4.6)
Also other packages like mice could be tried
In general if you have multivariate time series, this means you have correlations between your different variables plus you have correlations of each variable in the time axis.
(here is a talk from useR! 2017 conference which among other things explains this)
In theory it sounds like it would make most sense if you try to use both of the correlations. But if the correlations in time is for example very strong, univariate time series imputation methods from imputeTS might even work best.
On the other hand, if the correlation between your variables is very strong, non time series imputation packages could work best. (like mice, VIM, missMDA and others)
|
Imputation methods for time series data
Your approach sounds very theoretical.
Did you analyze the imputations of the packages you mentioned?
Often imputation packages have requirements (e.g. MCAR data), but will still do a reasonable goo
|
46,607
|
Upper case (P) or lower case (p) to denote p-values and probabilities in frequentist and Bayesian statistics
|
Notationally: $P(\cdot)$ is a functional to denote probability of events. The same notation is used when speaking of probability in a frequentist framework (e.g. probability is a frequency of events observed in infinite replications of the universe, or counterfactual probability) as in a Bayesian framework (e.g. probability is a degree of belief).
The history about $p$-values seems to date back to De Moivre in the 18th century according to Wikipedia. Pearson used the capital $P$ to denote a measure of inconsistency of an observed set of data from a hypothesis which is to be tested using a test statistic which assumes a known distribution when that hypothesis is true. Modern usage has reverted to lower case $p$ more often than not, I find, because the $p$ value is not a random variable, a type of distinction which is also somewhat antiquated in modern probability theory. I think you may find for submitting statistical research that most journals use lowercase $p$ but there may be instances of $P$, the only recommendation is to agree on one usage and be consistent.
There is no $p$ for Bayesian statistics. Bayesian testing is a controversial subject, but all agree that the $p$-value should immediately be bucked when doing a Bayesian analysis. My personal preference is to report the results of a Bayesian analysis using credible intervals which provide a range of plausible (believable) effects for the parameter. Bayes factors can summarize tests in a manner similar to $p$-values, but $p$-values suck, why would you use them if you're doing a Bayesian analysis?
|
Upper case (P) or lower case (p) to denote p-values and probabilities in frequentist and Bayesian st
|
Notationally: $P(\cdot)$ is a functional to denote probability of events. The same notation is used when speaking of probability in a frequentist framework (e.g. probability is a frequency of events o
|
Upper case (P) or lower case (p) to denote p-values and probabilities in frequentist and Bayesian statistics
Notationally: $P(\cdot)$ is a functional to denote probability of events. The same notation is used when speaking of probability in a frequentist framework (e.g. probability is a frequency of events observed in infinite replications of the universe, or counterfactual probability) as in a Bayesian framework (e.g. probability is a degree of belief).
The history about $p$-values seems to date back to De Moivre in the 18th century according to Wikipedia. Pearson used the capital $P$ to denote a measure of inconsistency of an observed set of data from a hypothesis which is to be tested using a test statistic which assumes a known distribution when that hypothesis is true. Modern usage has reverted to lower case $p$ more often than not, I find, because the $p$ value is not a random variable, a type of distinction which is also somewhat antiquated in modern probability theory. I think you may find for submitting statistical research that most journals use lowercase $p$ but there may be instances of $P$, the only recommendation is to agree on one usage and be consistent.
There is no $p$ for Bayesian statistics. Bayesian testing is a controversial subject, but all agree that the $p$-value should immediately be bucked when doing a Bayesian analysis. My personal preference is to report the results of a Bayesian analysis using credible intervals which provide a range of plausible (believable) effects for the parameter. Bayes factors can summarize tests in a manner similar to $p$-values, but $p$-values suck, why would you use them if you're doing a Bayesian analysis?
|
Upper case (P) or lower case (p) to denote p-values and probabilities in frequentist and Bayesian st
Notationally: $P(\cdot)$ is a functional to denote probability of events. The same notation is used when speaking of probability in a frequentist framework (e.g. probability is a frequency of events o
|
46,608
|
Upper case (P) or lower case (p) to denote p-values and probabilities in frequentist and Bayesian statistics
|
You are free to use any notation you want $p(X=x)$, $P(X=x)$, $\mathrm{P}(X=x)$, $\Pr(X=x)$ etc. I never heard about any formal rules about it. I have an impression that $p(X)$ is more often used when authors want to talk about probability in general and use catchall term for things like probabilities of events $p(X=x)$, probability density functions $p(x)$ etc., while uppercase $P$ or $\Pr$ is used more commonly when talking about events.
|
Upper case (P) or lower case (p) to denote p-values and probabilities in frequentist and Bayesian st
|
You are free to use any notation you want $p(X=x)$, $P(X=x)$, $\mathrm{P}(X=x)$, $\Pr(X=x)$ etc. I never heard about any formal rules about it. I have an impression that $p(X)$ is more often used when
|
Upper case (P) or lower case (p) to denote p-values and probabilities in frequentist and Bayesian statistics
You are free to use any notation you want $p(X=x)$, $P(X=x)$, $\mathrm{P}(X=x)$, $\Pr(X=x)$ etc. I never heard about any formal rules about it. I have an impression that $p(X)$ is more often used when authors want to talk about probability in general and use catchall term for things like probabilities of events $p(X=x)$, probability density functions $p(x)$ etc., while uppercase $P$ or $\Pr$ is used more commonly when talking about events.
|
Upper case (P) or lower case (p) to denote p-values and probabilities in frequentist and Bayesian st
You are free to use any notation you want $p(X=x)$, $P(X=x)$, $\mathrm{P}(X=x)$, $\Pr(X=x)$ etc. I never heard about any formal rules about it. I have an impression that $p(X)$ is more often used when
|
46,609
|
Upper case (P) or lower case (p) to denote p-values and probabilities in frequentist and Bayesian statistics
|
I favour $P$ for P-value partly because that usage goes back a long way and I like history of ideas, but perhaps more because $p$ is already overloaded: I often want $p$ to mean
probability in general
a particular probability (e.g. in notation for binomial distributions)
the number of predictors or covariates.
Contrary to that, a reminder or two in text that we are discussing p-values might reduce the ambiguity or puzzlement about what $p$ means.
I've not picked up any hint that being frequentist or Bayesian makes any difference to preferred notation, but that's just an uninformative prior speaking. Evidence and argument on that detail is especially welcome.
Notation in statistics (as generally in any subject with mathematical content) is a messy mixture of tradition, accident and logic. We have some guidelines, such as Greek for parameters and roman for statistics, but consistency is elusive. In directional statistics, for example, trigonometric conventions dominate and $\theta$ and $\phi$ are routinely names for variables.
We would all benefit by agreeing on some better and more consistent notations: there is just the small detail of what those might be.
Here is a case of two common notations. It would usually be futile to try to change anybody else's choice. At most we can carp if authors aren't consistent within publications we review or don't follow an arbitrary style standard for a journal.
|
Upper case (P) or lower case (p) to denote p-values and probabilities in frequentist and Bayesian st
|
I favour $P$ for P-value partly because that usage goes back a long way and I like history of ideas, but perhaps more because $p$ is already overloaded: I often want $p$ to mean
probability in gener
|
Upper case (P) or lower case (p) to denote p-values and probabilities in frequentist and Bayesian statistics
I favour $P$ for P-value partly because that usage goes back a long way and I like history of ideas, but perhaps more because $p$ is already overloaded: I often want $p$ to mean
probability in general
a particular probability (e.g. in notation for binomial distributions)
the number of predictors or covariates.
Contrary to that, a reminder or two in text that we are discussing p-values might reduce the ambiguity or puzzlement about what $p$ means.
I've not picked up any hint that being frequentist or Bayesian makes any difference to preferred notation, but that's just an uninformative prior speaking. Evidence and argument on that detail is especially welcome.
Notation in statistics (as generally in any subject with mathematical content) is a messy mixture of tradition, accident and logic. We have some guidelines, such as Greek for parameters and roman for statistics, but consistency is elusive. In directional statistics, for example, trigonometric conventions dominate and $\theta$ and $\phi$ are routinely names for variables.
We would all benefit by agreeing on some better and more consistent notations: there is just the small detail of what those might be.
Here is a case of two common notations. It would usually be futile to try to change anybody else's choice. At most we can carp if authors aren't consistent within publications we review or don't follow an arbitrary style standard for a journal.
|
Upper case (P) or lower case (p) to denote p-values and probabilities in frequentist and Bayesian st
I favour $P$ for P-value partly because that usage goes back a long way and I like history of ideas, but perhaps more because $p$ is already overloaded: I often want $p$ to mean
probability in gener
|
46,610
|
Support vector machine optimization question
|
The simplification in this problem is that the intercept term $\theta_0=0$. This condition allows to draw the decision boundary through the origin. The goal is then to maximize the signed length of the projection vectors, so as to minimize the norm of $\lVert\theta\rVert$, which is our optimization goal: $\underset{\theta}{\arg \min} \frac{1}{2}\displaystyle \sum_{j=1}^n \theta_j^2.$
If you pivot the green vector $\theta$ around the origin, the optimization falls on the most intuitive position. Here is the simulation on Geogebra with a slider.
So just read the x-coordinate, and think about the reciprocal relationship of $p$ and $\lVert \theta \rVert$ (smaller projection values necessitate higher $\lVert\theta\rVert$ to compensate), under the $\pm 1$ constraint.
If the projection happened to be symmetrically $\pm 2$, your constraints that $p^{I=1}\lVert\theta\rVert\geq 1$, and $p^{I=0}\lVert\theta\rVert\leq -1$ would dictate for the norm, $\lVert \theta \rVert$ to be at least $1/2$.
The norm of theta is its length $\lVert \theta \rVert=\sqrt{\theta_1^2 + \theta_2^2}$ in the case of two features (as in the case illustrated - without intercept or "bias term").
|
Support vector machine optimization question
|
The simplification in this problem is that the intercept term $\theta_0=0$. This condition allows to draw the decision boundary through the origin. The goal is then to maximize the signed length of th
|
Support vector machine optimization question
The simplification in this problem is that the intercept term $\theta_0=0$. This condition allows to draw the decision boundary through the origin. The goal is then to maximize the signed length of the projection vectors, so as to minimize the norm of $\lVert\theta\rVert$, which is our optimization goal: $\underset{\theta}{\arg \min} \frac{1}{2}\displaystyle \sum_{j=1}^n \theta_j^2.$
If you pivot the green vector $\theta$ around the origin, the optimization falls on the most intuitive position. Here is the simulation on Geogebra with a slider.
So just read the x-coordinate, and think about the reciprocal relationship of $p$ and $\lVert \theta \rVert$ (smaller projection values necessitate higher $\lVert\theta\rVert$ to compensate), under the $\pm 1$ constraint.
If the projection happened to be symmetrically $\pm 2$, your constraints that $p^{I=1}\lVert\theta\rVert\geq 1$, and $p^{I=0}\lVert\theta\rVert\leq -1$ would dictate for the norm, $\lVert \theta \rVert$ to be at least $1/2$.
The norm of theta is its length $\lVert \theta \rVert=\sqrt{\theta_1^2 + \theta_2^2}$ in the case of two features (as in the case illustrated - without intercept or "bias term").
|
Support vector machine optimization question
The simplification in this problem is that the intercept term $\theta_0=0$. This condition allows to draw the decision boundary through the origin. The goal is then to maximize the signed length of th
|
46,611
|
Support vector machine optimization question
|
I suggest that you study Andrew Ng's lecture note on SVM here. The short answer is, first you need to solve $\alpha_i$ in dual optimization problem. Then, solve for $\theta$, or $w$ in the lecture note using the following eq:
$$w=\sum_{i=1}^{m}\alpha_iy^{(i)}x^{(i)}$$
|
Support vector machine optimization question
|
I suggest that you study Andrew Ng's lecture note on SVM here. The short answer is, first you need to solve $\alpha_i$ in dual optimization problem. Then, solve for $\theta$, or $w$ in the lecture not
|
Support vector machine optimization question
I suggest that you study Andrew Ng's lecture note on SVM here. The short answer is, first you need to solve $\alpha_i$ in dual optimization problem. Then, solve for $\theta$, or $w$ in the lecture note using the following eq:
$$w=\sum_{i=1}^{m}\alpha_iy^{(i)}x^{(i)}$$
|
Support vector machine optimization question
I suggest that you study Andrew Ng's lecture note on SVM here. The short answer is, first you need to solve $\alpha_i$ in dual optimization problem. Then, solve for $\theta$, or $w$ in the lecture not
|
46,612
|
Discrete white noise
|
Assuming that the $S(n)$ are also binary (taking on values in $\{0,1\}$) as are the $X(n)$, then I suspect that the $\epsilon(n)$ are also meant to be taking on values in $\{0,1\}$ and that $+$ in $X(n) = S(n) + \epsilon(n)$ is intended to be a modulo 2 sum or Exclusive-OR sum which might be better written as
$$X(n) = S(n)\oplus \epsilon(n)\tag{1}$$
or, in real-number arithmetic as
$$X(n) = S(n) + \epsilon(n) - 2S(n)\epsilon(n).\tag{2}$$
The model for this white noise process is an IID sequence of Bernoulli
random variables with parameter $p$ that are independent of the $S(n)$ series.
Yes, I know that you said uncorrelated, but uncorrelated Bernoulli random variables are also independent random variables.
Readers of stats.SE and time-series books will undoubtedly be horrified
at the nonlinear equation $(2)$, but this model is quite commonly used in the communications and information theory literature under the name Binary Symmetric Channel with crossover probability $p$ and readers of
dsp.SE will be quite familiar with the model.
|
Discrete white noise
|
Assuming that the $S(n)$ are also binary (taking on values in $\{0,1\}$) as are the $X(n)$, then I suspect that the $\epsilon(n)$ are also meant to be taking on values in $\{0,1\}$ and that $+$ in $X(
|
Discrete white noise
Assuming that the $S(n)$ are also binary (taking on values in $\{0,1\}$) as are the $X(n)$, then I suspect that the $\epsilon(n)$ are also meant to be taking on values in $\{0,1\}$ and that $+$ in $X(n) = S(n) + \epsilon(n)$ is intended to be a modulo 2 sum or Exclusive-OR sum which might be better written as
$$X(n) = S(n)\oplus \epsilon(n)\tag{1}$$
or, in real-number arithmetic as
$$X(n) = S(n) + \epsilon(n) - 2S(n)\epsilon(n).\tag{2}$$
The model for this white noise process is an IID sequence of Bernoulli
random variables with parameter $p$ that are independent of the $S(n)$ series.
Yes, I know that you said uncorrelated, but uncorrelated Bernoulli random variables are also independent random variables.
Readers of stats.SE and time-series books will undoubtedly be horrified
at the nonlinear equation $(2)$, but this model is quite commonly used in the communications and information theory literature under the name Binary Symmetric Channel with crossover probability $p$ and readers of
dsp.SE will be quite familiar with the model.
|
Discrete white noise
Assuming that the $S(n)$ are also binary (taking on values in $\{0,1\}$) as are the $X(n)$, then I suspect that the $\epsilon(n)$ are also meant to be taking on values in $\{0,1\}$ and that $+$ in $X(
|
46,613
|
Discrete white noise
|
Discrete white noise definition is quite similar to continuous noise definition, meaning that it has mean zero (constant), and its variance is also constant (nonzero), and there's no autocorrelation:
$$E[\varepsilon(n)]=0$$
$$Var[\varepsilon(n)]=\sigma^2$$
$$E[\varepsilon(n)\varepsilon(n-k)]=0,\space k>0 $$
Example: $\varepsilon(n)\sim Pois(\lambda)-\lambda$, you have:
$$E[\varepsilon(n)]=0$$
$$Var[\varepsilon(n)]=\lambda$$
$$E[\varepsilon(n)\varepsilon(n-k)]=0,\space k>0 $$
This is used widely in finance to model so called jump processes as opposed to diffusion, which is a usual Brownian motion. Also, look up Levy and Poisson processes. This one is interesting that it has some similarities to Brownian motion since Poisson distribution like Gaussian is a stable distribution. So accumulation of this particular noise remains to be Poisson!
|
Discrete white noise
|
Discrete white noise definition is quite similar to continuous noise definition, meaning that it has mean zero (constant), and its variance is also constant (nonzero), and there's no autocorrelation:
|
Discrete white noise
Discrete white noise definition is quite similar to continuous noise definition, meaning that it has mean zero (constant), and its variance is also constant (nonzero), and there's no autocorrelation:
$$E[\varepsilon(n)]=0$$
$$Var[\varepsilon(n)]=\sigma^2$$
$$E[\varepsilon(n)\varepsilon(n-k)]=0,\space k>0 $$
Example: $\varepsilon(n)\sim Pois(\lambda)-\lambda$, you have:
$$E[\varepsilon(n)]=0$$
$$Var[\varepsilon(n)]=\lambda$$
$$E[\varepsilon(n)\varepsilon(n-k)]=0,\space k>0 $$
This is used widely in finance to model so called jump processes as opposed to diffusion, which is a usual Brownian motion. Also, look up Levy and Poisson processes. This one is interesting that it has some similarities to Brownian motion since Poisson distribution like Gaussian is a stable distribution. So accumulation of this particular noise remains to be Poisson!
|
Discrete white noise
Discrete white noise definition is quite similar to continuous noise definition, meaning that it has mean zero (constant), and its variance is also constant (nonzero), and there's no autocorrelation:
|
46,614
|
Using Non-numeric Features
|
In most cases, you find a way to turn the non-numeric feature in a numeric one, and then go from there.
The simplest solution is to generate a set of indicator variables. For example, if you have $n$ different schools, you might add a set of $n$ variables $S_1, S_2, \ldots S_n$ to each data point. To indicate that $i$th school on your list is the closest, set $S_i = 1$ and set the rest of the variables to zero. This works well when 1) the identity of the closest school matters and 2) you can enumerate the schools present in your data set.
You might also think that the school identity per se doesn't actually carry much information; it's just a proxy for information about school size, test scores, student:teacher ratio, etc. You could join your data set with another data source that has that sort of information. The features would now be something like "size of the nearest high school", "Average SAT score at the nearest high school", etc.
It's also possible that the name of the school has a little bit of signal in it. For example, a good school system might have magnet and/or lab schools in it. You could design features to extract these from a string containing the school name. These would then be added, as indicator variables, to your feature set. This process, often called feature engineering, may require some domain knowledge.
However, in some cases, you can work directly on the non-numeric data. This is particularly true when building a discriminative classifier (or anything else using distances). For example, there are special kernels for support vector machines that allow you directly operate on strings (e.g., http://www.jmlr.org/papers/volume2/lodhi02a/lodhi02a.pdf), without turning them into something like a bag-of-words vector or something like that.
|
Using Non-numeric Features
|
In most cases, you find a way to turn the non-numeric feature in a numeric one, and then go from there.
The simplest solution is to generate a set of indicator variables. For example, if you have $n$
|
Using Non-numeric Features
In most cases, you find a way to turn the non-numeric feature in a numeric one, and then go from there.
The simplest solution is to generate a set of indicator variables. For example, if you have $n$ different schools, you might add a set of $n$ variables $S_1, S_2, \ldots S_n$ to each data point. To indicate that $i$th school on your list is the closest, set $S_i = 1$ and set the rest of the variables to zero. This works well when 1) the identity of the closest school matters and 2) you can enumerate the schools present in your data set.
You might also think that the school identity per se doesn't actually carry much information; it's just a proxy for information about school size, test scores, student:teacher ratio, etc. You could join your data set with another data source that has that sort of information. The features would now be something like "size of the nearest high school", "Average SAT score at the nearest high school", etc.
It's also possible that the name of the school has a little bit of signal in it. For example, a good school system might have magnet and/or lab schools in it. You could design features to extract these from a string containing the school name. These would then be added, as indicator variables, to your feature set. This process, often called feature engineering, may require some domain knowledge.
However, in some cases, you can work directly on the non-numeric data. This is particularly true when building a discriminative classifier (or anything else using distances). For example, there are special kernels for support vector machines that allow you directly operate on strings (e.g., http://www.jmlr.org/papers/volume2/lodhi02a/lodhi02a.pdf), without turning them into something like a bag-of-words vector or something like that.
|
Using Non-numeric Features
In most cases, you find a way to turn the non-numeric feature in a numeric one, and then go from there.
The simplest solution is to generate a set of indicator variables. For example, if you have $n$
|
46,615
|
Using Non-numeric Features
|
There could be many categorical variables that affect the sales price - for example, Ready_for_immediate_occupancy (Yes/No). I will not worry about your specific variable - closest high school - and just address using categorical variables in regression.
As @Matt Krause said, certainly it is not uncommon to use indicator variables and simply treat them as numeric variables. But there are other methods as well. One of my favorites is to use decision trees. Many people learn about decision trees as a means of classification - predicting a nominal value. Your question is about regression - predicting a numeric value. But decision trees work just fine for regression and have no problems with categorical variables. One of the most popular methods is CART - Classification and Regression Trees. Other decision tree algorithms handle this too. Try googling decision trees regression and you should find lots on the topic.
|
Using Non-numeric Features
|
There could be many categorical variables that affect the sales price - for example, Ready_for_immediate_occupancy (Yes/No). I will not worry about your specific variable - closest high school - and
|
Using Non-numeric Features
There could be many categorical variables that affect the sales price - for example, Ready_for_immediate_occupancy (Yes/No). I will not worry about your specific variable - closest high school - and just address using categorical variables in regression.
As @Matt Krause said, certainly it is not uncommon to use indicator variables and simply treat them as numeric variables. But there are other methods as well. One of my favorites is to use decision trees. Many people learn about decision trees as a means of classification - predicting a nominal value. Your question is about regression - predicting a numeric value. But decision trees work just fine for regression and have no problems with categorical variables. One of the most popular methods is CART - Classification and Regression Trees. Other decision tree algorithms handle this too. Try googling decision trees regression and you should find lots on the topic.
|
Using Non-numeric Features
There could be many categorical variables that affect the sales price - for example, Ready_for_immediate_occupancy (Yes/No). I will not worry about your specific variable - closest high school - and
|
46,616
|
How can a Mann-Whitney U-Test return a p = 1.00 for unequal means?
|
How can a Mann-Whitney U-Test return a p = 1.00 for unequal means?
Because it's not a test for means.
I'm wanting to compare scores derived from a reaction time task between two groups with unequal sizes (G1 = 78; G2 = 23). However, when I run the U test it tells me there is no significant difference, U = 897.00, z = .000, P = 1.00. How am I getting significance of 1.00 when there is at least some difference between the means (.21 vs .20)?
Because it doesn't compare means! (Nor does it compare medians, in spite of many books saying otherwise)
Even though the difference in means is slightly different from 0, the thing that the Mann-Whitney looks at* turned out to be 0.
* whether conceived in terms of average rank or as two-sample Hodges-Lehmann difference.
(See this answer, for example)
I was expecting no difference, but these stats are different from what I got from t-testing.
If they were always the same, you wouldn't need two different tests.
I am running the U-test to account for unequal group sizes and non-normal data on several different variables, but is a z-statistic of .000 and P value of 1.00 accurate to report?
I'm not sure what you mean by "accurate" but I'd just report those figures; they're certainly possible.
|
How can a Mann-Whitney U-Test return a p = 1.00 for unequal means?
|
How can a Mann-Whitney U-Test return a p = 1.00 for unequal means?
Because it's not a test for means.
I'm wanting to compare scores derived from a reaction time task between two groups with unequal
|
How can a Mann-Whitney U-Test return a p = 1.00 for unequal means?
How can a Mann-Whitney U-Test return a p = 1.00 for unequal means?
Because it's not a test for means.
I'm wanting to compare scores derived from a reaction time task between two groups with unequal sizes (G1 = 78; G2 = 23). However, when I run the U test it tells me there is no significant difference, U = 897.00, z = .000, P = 1.00. How am I getting significance of 1.00 when there is at least some difference between the means (.21 vs .20)?
Because it doesn't compare means! (Nor does it compare medians, in spite of many books saying otherwise)
Even though the difference in means is slightly different from 0, the thing that the Mann-Whitney looks at* turned out to be 0.
* whether conceived in terms of average rank or as two-sample Hodges-Lehmann difference.
(See this answer, for example)
I was expecting no difference, but these stats are different from what I got from t-testing.
If they were always the same, you wouldn't need two different tests.
I am running the U-test to account for unequal group sizes and non-normal data on several different variables, but is a z-statistic of .000 and P value of 1.00 accurate to report?
I'm not sure what you mean by "accurate" but I'd just report those figures; they're certainly possible.
|
How can a Mann-Whitney U-Test return a p = 1.00 for unequal means?
How can a Mann-Whitney U-Test return a p = 1.00 for unequal means?
Because it's not a test for means.
I'm wanting to compare scores derived from a reaction time task between two groups with unequal
|
46,617
|
How can a Mann-Whitney U-Test return a p = 1.00 for unequal means?
|
The Mann-Whitney U test is a non-parametric test meaning that it is, loosely put, counting up hits and misses for rankings with the point being that the number of outcomes is countable as opposed to real continuous like a $t$-statistic. With some simplification the number of outcomes is $\frac{n(n+1)}{2}$ or the number of combinations of $n$ objects taken two at a time, and in practice, some of those combinations yield a $p=1$. What this does is to limit the number of possible probability values a U-statistic can output such that a perfect $p=1$ merely means that no difference was detected as the countable events matched. The $p=1$ is an accidental outcome, that is, it does not have the same meaning as it would in a deterministic system, and even suspected to be deterministic systems suffer the black swan problem. That is, just because the probability of seeing only white swans might seem like $p=1$, doesn't make that estimate hold for a larger dataset. That is, one needs a solid physical reason for calling a system deterministic, and guesswork alone cannot provide that. As it is, the outcomes for the Mann-Whitney U test are countable, each outcome is only approximate.
Couldn't help but notice the discussion above. Here is a worked example of what the Mann-Whitney U test does. In that file, one can see that the Mann-Whitney U test tests differences in location of the data sets using a fairly comprehensive comparison of rankings. The concept of location of data is a more general one than mean or median. The best measure of data location can be thought of as the minimum variance unbiased estimator of data location MVUE. For example, for a uniform distribution of unknown population endpoints, the average extreme value $\frac{min(x)+max(x)}{2}$ of $x$-values is a lower variance estimator of location than either the mean or median, despite the fact that for uniform distributions all three measures tend to the same location in the limit as the number of observations increases unbounded large.
The Mann-Whitney U test, has as its most important feature is that it is an unbiased test of difference of location as it ignores non-normality. Finally, the difference of location measure of the Mann-Whitney U test is...wait for it...the U-statistic.
The theory of U-statistics allows a minimum-variance unbiased estimator to be derived from each unbiased estimator of an estimable parameter (alternatively, statistical functional) for large classes of probability distributions. If $f(x_1, x_2) = |x_1 - x_2|$, the U-statistic is the mean pairwise deviation
$f_n(x_1,\ldots, x_n) = \sum_{i\neq j} |x_i - x_j| / (n(n-1))$, defined for $n\ge 2$. That theory explains that the U-statistic is MVUE for the median of data triplets and not of $n$ data samples.
|
How can a Mann-Whitney U-Test return a p = 1.00 for unequal means?
|
The Mann-Whitney U test is a non-parametric test meaning that it is, loosely put, counting up hits and misses for rankings with the point being that the number of outcomes is countable as opposed to r
|
How can a Mann-Whitney U-Test return a p = 1.00 for unequal means?
The Mann-Whitney U test is a non-parametric test meaning that it is, loosely put, counting up hits and misses for rankings with the point being that the number of outcomes is countable as opposed to real continuous like a $t$-statistic. With some simplification the number of outcomes is $\frac{n(n+1)}{2}$ or the number of combinations of $n$ objects taken two at a time, and in practice, some of those combinations yield a $p=1$. What this does is to limit the number of possible probability values a U-statistic can output such that a perfect $p=1$ merely means that no difference was detected as the countable events matched. The $p=1$ is an accidental outcome, that is, it does not have the same meaning as it would in a deterministic system, and even suspected to be deterministic systems suffer the black swan problem. That is, just because the probability of seeing only white swans might seem like $p=1$, doesn't make that estimate hold for a larger dataset. That is, one needs a solid physical reason for calling a system deterministic, and guesswork alone cannot provide that. As it is, the outcomes for the Mann-Whitney U test are countable, each outcome is only approximate.
Couldn't help but notice the discussion above. Here is a worked example of what the Mann-Whitney U test does. In that file, one can see that the Mann-Whitney U test tests differences in location of the data sets using a fairly comprehensive comparison of rankings. The concept of location of data is a more general one than mean or median. The best measure of data location can be thought of as the minimum variance unbiased estimator of data location MVUE. For example, for a uniform distribution of unknown population endpoints, the average extreme value $\frac{min(x)+max(x)}{2}$ of $x$-values is a lower variance estimator of location than either the mean or median, despite the fact that for uniform distributions all three measures tend to the same location in the limit as the number of observations increases unbounded large.
The Mann-Whitney U test, has as its most important feature is that it is an unbiased test of difference of location as it ignores non-normality. Finally, the difference of location measure of the Mann-Whitney U test is...wait for it...the U-statistic.
The theory of U-statistics allows a minimum-variance unbiased estimator to be derived from each unbiased estimator of an estimable parameter (alternatively, statistical functional) for large classes of probability distributions. If $f(x_1, x_2) = |x_1 - x_2|$, the U-statistic is the mean pairwise deviation
$f_n(x_1,\ldots, x_n) = \sum_{i\neq j} |x_i - x_j| / (n(n-1))$, defined for $n\ge 2$. That theory explains that the U-statistic is MVUE for the median of data triplets and not of $n$ data samples.
|
How can a Mann-Whitney U-Test return a p = 1.00 for unequal means?
The Mann-Whitney U test is a non-parametric test meaning that it is, loosely put, counting up hits and misses for rankings with the point being that the number of outcomes is countable as opposed to r
|
46,618
|
How choose a proper ARIMA model looking at ACF and PACF?
|
The visuals suggest your data is probably hourly data which means that there may be daily effects depending upon the kind of data that it is. Daily effects often include day-of-the-week effects , weekly effects , holiday effects et al and of course possible outliers/level shifts/time trends. Why don't you post your data and it's type and perhaps I can help further. Looking at acf plots (symptoms) in order to deduce "causes" can be useful but many times insufficient to identify a useful model. Leaning on simple statistics like BIC and AIC can be confusing (nearly always !) when the data has inherent structure other than very simple ARIMA. Very simple data often arises in textbooks bent on proposing simple model identification tools but hardly ever in the real world.
EDITED AFTER Math's Fun REQUEST FOR VIABLE ALTERNATIVES TO BIC AND SUCH:
Besides AUTOBOX ( based upon my dissertation topic ) which uses built-in heuristics showing the step-by-step process I can provide here some top level guidance. There is a free version (including an R version) which allows hundreds of text book data sets to be used without any commitment. This free feature can be very educational and instrumental in expanding one's consciousness as to possible pitfalls and approaches to model identification.
In summary ARIMA ( any model ! ) identification is an iterative process not a one-and-done. The anachronistic view that you can assume that there are no outliers and form a model that subsequentially detects outliers suggests possible (probable )sub-optimization because your first assumption has been proven to be wrong. The modern approach requires a comprehensive/simultaneous/global approach which yields a holistic model combining both memory (ARIMA) and needed dummy variables. To give you an example . First identify possible pulses/level shifts/local time trends/seasonal pulses and then take the residuals from this tentative model and then identify ARIMA . Now form a composite/hybrid model and validate/test for remaining structure in the errors which can include ARIMA modifications and additional dummy variables and possible treatment for time-varying parameters /and/or time varying/dependent error variance. Secondarily one might use the Inverse-Autocorrelation procedure http://www.eco.uc3m.es/~jgonzalo/teaching/timeseriesMA/IdentificationWei.pdf to provide a reasonable initial ARIMA model and proceed from there to augment as necessary. Another very useful approach is the EACF or Extended ACF With regard to ARMA time series, what exactly is eacf (extended auto-correlation function)? . The aforementioned AUTOBOX uses a hybrid of these two to initially identify a model before it iterates to a statistically significant and parsimonious solution.
Thus model identification is an iterative , self-checking process .
|
How choose a proper ARIMA model looking at ACF and PACF?
|
The visuals suggest your data is probably hourly data which means that there may be daily effects depending upon the kind of data that it is. Daily effects often include day-of-the-week effects , week
|
How choose a proper ARIMA model looking at ACF and PACF?
The visuals suggest your data is probably hourly data which means that there may be daily effects depending upon the kind of data that it is. Daily effects often include day-of-the-week effects , weekly effects , holiday effects et al and of course possible outliers/level shifts/time trends. Why don't you post your data and it's type and perhaps I can help further. Looking at acf plots (symptoms) in order to deduce "causes" can be useful but many times insufficient to identify a useful model. Leaning on simple statistics like BIC and AIC can be confusing (nearly always !) when the data has inherent structure other than very simple ARIMA. Very simple data often arises in textbooks bent on proposing simple model identification tools but hardly ever in the real world.
EDITED AFTER Math's Fun REQUEST FOR VIABLE ALTERNATIVES TO BIC AND SUCH:
Besides AUTOBOX ( based upon my dissertation topic ) which uses built-in heuristics showing the step-by-step process I can provide here some top level guidance. There is a free version (including an R version) which allows hundreds of text book data sets to be used without any commitment. This free feature can be very educational and instrumental in expanding one's consciousness as to possible pitfalls and approaches to model identification.
In summary ARIMA ( any model ! ) identification is an iterative process not a one-and-done. The anachronistic view that you can assume that there are no outliers and form a model that subsequentially detects outliers suggests possible (probable )sub-optimization because your first assumption has been proven to be wrong. The modern approach requires a comprehensive/simultaneous/global approach which yields a holistic model combining both memory (ARIMA) and needed dummy variables. To give you an example . First identify possible pulses/level shifts/local time trends/seasonal pulses and then take the residuals from this tentative model and then identify ARIMA . Now form a composite/hybrid model and validate/test for remaining structure in the errors which can include ARIMA modifications and additional dummy variables and possible treatment for time-varying parameters /and/or time varying/dependent error variance. Secondarily one might use the Inverse-Autocorrelation procedure http://www.eco.uc3m.es/~jgonzalo/teaching/timeseriesMA/IdentificationWei.pdf to provide a reasonable initial ARIMA model and proceed from there to augment as necessary. Another very useful approach is the EACF or Extended ACF With regard to ARMA time series, what exactly is eacf (extended auto-correlation function)? . The aforementioned AUTOBOX uses a hybrid of these two to initially identify a model before it iterates to a statistically significant and parsimonious solution.
Thus model identification is an iterative , self-checking process .
|
How choose a proper ARIMA model looking at ACF and PACF?
The visuals suggest your data is probably hourly data which means that there may be daily effects depending upon the kind of data that it is. Daily effects often include day-of-the-week effects , week
|
46,619
|
How choose a proper ARIMA model looking at ACF and PACF?
|
Since the ACF of the series gets closer to zero with increasing lags, the order of integration must be 0, that is you have ARIMA(p,0,q). (here I might be a little cautious since you might have the order of integration, $d$, such that $-\frac12<d<0$, in case you are interested in long-range dependence filters).
On the other hand as ACF has swings below and above zero, you have some antipertistance in your data. PACF gets close to zero after almost 30 lags hence q is possibly below 30. It is though not so easy to say something about p. Did you try using an information criterion like BIC?
Could you please share the source of your data or say bit on what do they represent?
|
How choose a proper ARIMA model looking at ACF and PACF?
|
Since the ACF of the series gets closer to zero with increasing lags, the order of integration must be 0, that is you have ARIMA(p,0,q). (here I might be a little cautious since you might have the ord
|
How choose a proper ARIMA model looking at ACF and PACF?
Since the ACF of the series gets closer to zero with increasing lags, the order of integration must be 0, that is you have ARIMA(p,0,q). (here I might be a little cautious since you might have the order of integration, $d$, such that $-\frac12<d<0$, in case you are interested in long-range dependence filters).
On the other hand as ACF has swings below and above zero, you have some antipertistance in your data. PACF gets close to zero after almost 30 lags hence q is possibly below 30. It is though not so easy to say something about p. Did you try using an information criterion like BIC?
Could you please share the source of your data or say bit on what do they represent?
|
How choose a proper ARIMA model looking at ACF and PACF?
Since the ACF of the series gets closer to zero with increasing lags, the order of integration must be 0, that is you have ARIMA(p,0,q). (here I might be a little cautious since you might have the ord
|
46,620
|
Decision Trees: if always binary split
|
CHAID Trees can have multiple nodes (more than 2), so decision trees are not always binary.
There are many different tree building algorithms and the Random Forest algorithm actually creates an ensemble of decision trees. In the original paper, the authors used a slight variation of the CART algorithm. However its not necessary to use CART to build a Random Forest model; other researchers have extended the concept by using tree learners built using different algorithms to generate a Random Forest model.
It is also not necessary that the independent variables which are used to build a decision tree necessarily have to be binary valued (0-1 valued or one-hot encoded categorical variables). Doing so would have severely restricted the usability of these algorithms. Each of the nodes may contain multiple categories after the split. For example, a variable with weather category as windy, foggy, rainy and sunny may be split into two nodes - Node 1 : windy + sunny, and Node 2 : foggy and rainy.
To answer your specific query: both scikit-learn and R use CART for building random forest models, so their base learners are binary trees.
|
Decision Trees: if always binary split
|
CHAID Trees can have multiple nodes (more than 2), so decision trees are not always binary.
There are many different tree building algorithms and the Random Forest algorithm actually creates an ensemb
|
Decision Trees: if always binary split
CHAID Trees can have multiple nodes (more than 2), so decision trees are not always binary.
There are many different tree building algorithms and the Random Forest algorithm actually creates an ensemble of decision trees. In the original paper, the authors used a slight variation of the CART algorithm. However its not necessary to use CART to build a Random Forest model; other researchers have extended the concept by using tree learners built using different algorithms to generate a Random Forest model.
It is also not necessary that the independent variables which are used to build a decision tree necessarily have to be binary valued (0-1 valued or one-hot encoded categorical variables). Doing so would have severely restricted the usability of these algorithms. Each of the nodes may contain multiple categories after the split. For example, a variable with weather category as windy, foggy, rainy and sunny may be split into two nodes - Node 1 : windy + sunny, and Node 2 : foggy and rainy.
To answer your specific query: both scikit-learn and R use CART for building random forest models, so their base learners are binary trees.
|
Decision Trees: if always binary split
CHAID Trees can have multiple nodes (more than 2), so decision trees are not always binary.
There are many different tree building algorithms and the Random Forest algorithm actually creates an ensemb
|
46,621
|
Statistical test to determine if a relationship is linear?
|
Any rank test will only test for monotonicity, and a highly nonlinear relationship can certainly be monotone. So any rank-based test won't be helpful.
I would recommend that you fit a linear and a nonlinear model and assess whether the nonlinear model explains a significantly larger amount of variance via ANOVA. Here is a little example in R:
set.seed(1)
xx <- runif(100)
yy <- xx^2+rnorm(100,0,0.1)
plot(xx,yy)
model.linear <- lm(yy~xx)
model.squared <- lm(yy~poly(xx,2))
anova(model.linear,model.squared)
Analysis of Variance Table
Model 1: yy ~ xx
Model 2: yy ~ poly(xx, 2)
Res.Df RSS Df Sum of Sq F Pr(>F)
1 98 1.27901
2 97 0.86772 1 0.41129 45.977 9.396e-10 ***
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
In this particular case, the ANOVA correctly identifies nonlinearity.
Of course, you can also look at higher orders than squares. Or use splines instead of unrestricted polynomials. Or harmonics. It really depends on what specific alternatives you have in mind, which will in turn depend on your specific use case.
|
Statistical test to determine if a relationship is linear?
|
Any rank test will only test for monotonicity, and a highly nonlinear relationship can certainly be monotone. So any rank-based test won't be helpful.
I would recommend that you fit a linear and a non
|
Statistical test to determine if a relationship is linear?
Any rank test will only test for monotonicity, and a highly nonlinear relationship can certainly be monotone. So any rank-based test won't be helpful.
I would recommend that you fit a linear and a nonlinear model and assess whether the nonlinear model explains a significantly larger amount of variance via ANOVA. Here is a little example in R:
set.seed(1)
xx <- runif(100)
yy <- xx^2+rnorm(100,0,0.1)
plot(xx,yy)
model.linear <- lm(yy~xx)
model.squared <- lm(yy~poly(xx,2))
anova(model.linear,model.squared)
Analysis of Variance Table
Model 1: yy ~ xx
Model 2: yy ~ poly(xx, 2)
Res.Df RSS Df Sum of Sq F Pr(>F)
1 98 1.27901
2 97 0.86772 1 0.41129 45.977 9.396e-10 ***
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
In this particular case, the ANOVA correctly identifies nonlinearity.
Of course, you can also look at higher orders than squares. Or use splines instead of unrestricted polynomials. Or harmonics. It really depends on what specific alternatives you have in mind, which will in turn depend on your specific use case.
|
Statistical test to determine if a relationship is linear?
Any rank test will only test for monotonicity, and a highly nonlinear relationship can certainly be monotone. So any rank-based test won't be helpful.
I would recommend that you fit a linear and a non
|
46,622
|
Statistical test to determine if a relationship is linear?
|
I like a lot the ANOVA-based answer by Stephan Kolassa. However, I would also like to offer a slightly different perspective.
First of all, consider the reason why you're testing for nonlinearity. If you want to test the assumptions of ordinary least squares to estimate the simple linear regression model, note that if you then want to use the estimated model to perform other tests (e.g., test whether the correlation between $X$ and $Y$ is statistically significant), the resulting test will be a composite test, whose Type I and Type II error rates won't be the nominal ones. This is one of multiple reasons why, instead than formally testing the assumptions of linear regression, you may want to use plots in order to understand if those assumptions are reasonable. Another reason is that the more tests you perform, the more likely you are to get a significant test result even if the null is true (after all, linearity of the relationship between $X$ and $Y$ is not the only assumption of the simple linear regression model), and closely related to this reason there's the fact that assumption tests have themselves assumptions!
For example, following Stephan Kolassa's example, let's build a simple regression model:
set.seed(1)
xx <- runif(100)
yy <- xx^2+rnorm(100,0,0.1)
plot(xx,yy)
linear.model <- lm(yy ~ xx)
The plot function for linear models shows a host of plots whose goal is exactly to give you an idea about the validity of the assumptions behind the linear model and the OLS estimation method. The purpose of the first of these plots, the residuals vs fitted plot, is exactly to show if there are deviations from the assumption of a linear relationship between the predictor $X$ and the response $Y$:
plot(linear.model)
You can clearly see that there is a quadratic trend between fitted values and residuals, thus the assumption that $Y$ is a linear function of $X$ is questionable.
If, however, you are determined on using a statistical test to verify the assumption of linearity, then you're faced with the issue that, as noted by Stephan Kolassa, there are infinitely many possible forms of nonlinearity, so you cannot possibly devise a single test for all of them. You need to decide your alternatives and then you can test for them.
Now, if all your alternatives are polynomials, then you don't even need ANOVA, because by default R computes orthogonal polynomials. Let's test 4 alternatives, i.e., a linear polynomial, a quadratic one, a cubic one and a quartic one. Of course, looking at the residual vs fitted plot, there's not evidence for an higher than degree 2 model here. However, we include the higher degree models to show how to operate in a more general case. We just need one fit to compare all four models:
quartic.model <- lm(yy ~ poly(xx,4))
summary(quartic.model)
Call:
lm(formula = yy ~ poly(xx, 4))
Residuals:
Min 1Q Median 3Q Max
-0.175678 -0.061429 -0.007403 0.056324 0.264612
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) 0.33729 0.00947 35.617 < 2e-16 ***
poly(xx, 4)1 2.78089 0.09470 29.365 < 2e-16 ***
poly(xx, 4)2 0.64132 0.09470 6.772 1.05e-09 ***
poly(xx, 4)3 0.04490 0.09470 0.474 0.636
poly(xx, 4)4 0.11722 0.09470 1.238 0.219
As you can see, the p-values for the first and second degree term are extremely low, meaning that a linear fit is insufficient, but the p-values for the third and fourth term are much larger, meaning that third or higher degree models are not justified. Thus, we select the second degree model. Note that this is only valid because R is fitting orthogonal polynomials (don't try to do this when fitting raw polynomials!). The result would have been the same if we had used ANOVA. As a matter of fact, the squares of the t-statistics here are equal to the F-statistics of the ANOVA test:
linear.model <- lm(yy ~ poly(xx,1))
quadratic.model <- lm(yy ~ poly(xx,2))
cubic.model <- lm(yy ~ poly(xx,3))
anova(linear.model, quadratic.model, cubic.model, quartic.model)
Analysis of Variance Table
Model 1: yy ~ poly(xx, 1)
Model 2: yy ~ poly(xx, 2)
Model 3: yy ~ poly(xx, 3)
Model 4: yy ~ poly(xx, 4)
Res.Df RSS Df Sum of Sq F Pr(>F)
1 98 1.27901
2 97 0.86772 1 0.41129 45.8622 1.049e-09 ***
3 96 0.86570 1 0.00202 0.2248 0.6365
4 95 0.85196 1 0.01374 1.5322 0.2188
For example, 6.772^2 = 45.85998, which is not exactly 45.8622 but pretty close, taking into account numerical errors.
The advantage of the ANOVA test comes into play when you want to explore non-polynomial models, as long as they're all nested. Two or more models $M_1,\dots,M_N$ are nested if the predictors of $M_i$ are a subset of the predictors of $M_{i+1}$, for each $i$. For example, let's consider a cubic spline model with 1 interior knot placed at the median of xx. The cubic spline basis includes linear, second and third degree polynomials, thus the linear.model, the quadratic.model and the cubic.model are all nested models of the following spline.model:
spline.model <- lm(yy ~ bs(xx,knots = quantile(xx,prob=0.5)))
The quartic.model is not a nested model of the spline.model (nor is the vice versa true), so we must leave it out of our ANOVA test:
anova(linear.model, quadratic.model,cubic.model,spline.model)
Analysis of Variance Table
Model 1: yy ~ poly(xx, 1)
Model 2: yy ~ poly(xx, 2)
Model 3: yy ~ poly(xx, 3)
Model 4: yy ~ bs(xx, knots = quantile(xx, prob = 0.5))
Res.Df RSS Df Sum of Sq F Pr(>F)
1 98 1.27901
2 97 0.86772 1 0.41129 46.1651 9.455e-10 ***
3 96 0.86570 1 0.00202 0.2263 0.6354
4 95 0.84637 1 0.01933 2.1699 0.1440
Again, we see that a quadratic fit is justified, but we have no reason to reject the hypothesis of a quadratic model, in favour of a cubic or a spline fit alternative.
Finally, if you would like to test also non-nested model (for example, you would like to test a linear model, a spline model and a nonlinear model such as a Gaussian Process), then I don't think there are hypothesis tests for that. In this case your best bet is cross-validation.
|
Statistical test to determine if a relationship is linear?
|
I like a lot the ANOVA-based answer by Stephan Kolassa. However, I would also like to offer a slightly different perspective.
First of all, consider the reason why you're testing for nonlinearity. If
|
Statistical test to determine if a relationship is linear?
I like a lot the ANOVA-based answer by Stephan Kolassa. However, I would also like to offer a slightly different perspective.
First of all, consider the reason why you're testing for nonlinearity. If you want to test the assumptions of ordinary least squares to estimate the simple linear regression model, note that if you then want to use the estimated model to perform other tests (e.g., test whether the correlation between $X$ and $Y$ is statistically significant), the resulting test will be a composite test, whose Type I and Type II error rates won't be the nominal ones. This is one of multiple reasons why, instead than formally testing the assumptions of linear regression, you may want to use plots in order to understand if those assumptions are reasonable. Another reason is that the more tests you perform, the more likely you are to get a significant test result even if the null is true (after all, linearity of the relationship between $X$ and $Y$ is not the only assumption of the simple linear regression model), and closely related to this reason there's the fact that assumption tests have themselves assumptions!
For example, following Stephan Kolassa's example, let's build a simple regression model:
set.seed(1)
xx <- runif(100)
yy <- xx^2+rnorm(100,0,0.1)
plot(xx,yy)
linear.model <- lm(yy ~ xx)
The plot function for linear models shows a host of plots whose goal is exactly to give you an idea about the validity of the assumptions behind the linear model and the OLS estimation method. The purpose of the first of these plots, the residuals vs fitted plot, is exactly to show if there are deviations from the assumption of a linear relationship between the predictor $X$ and the response $Y$:
plot(linear.model)
You can clearly see that there is a quadratic trend between fitted values and residuals, thus the assumption that $Y$ is a linear function of $X$ is questionable.
If, however, you are determined on using a statistical test to verify the assumption of linearity, then you're faced with the issue that, as noted by Stephan Kolassa, there are infinitely many possible forms of nonlinearity, so you cannot possibly devise a single test for all of them. You need to decide your alternatives and then you can test for them.
Now, if all your alternatives are polynomials, then you don't even need ANOVA, because by default R computes orthogonal polynomials. Let's test 4 alternatives, i.e., a linear polynomial, a quadratic one, a cubic one and a quartic one. Of course, looking at the residual vs fitted plot, there's not evidence for an higher than degree 2 model here. However, we include the higher degree models to show how to operate in a more general case. We just need one fit to compare all four models:
quartic.model <- lm(yy ~ poly(xx,4))
summary(quartic.model)
Call:
lm(formula = yy ~ poly(xx, 4))
Residuals:
Min 1Q Median 3Q Max
-0.175678 -0.061429 -0.007403 0.056324 0.264612
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) 0.33729 0.00947 35.617 < 2e-16 ***
poly(xx, 4)1 2.78089 0.09470 29.365 < 2e-16 ***
poly(xx, 4)2 0.64132 0.09470 6.772 1.05e-09 ***
poly(xx, 4)3 0.04490 0.09470 0.474 0.636
poly(xx, 4)4 0.11722 0.09470 1.238 0.219
As you can see, the p-values for the first and second degree term are extremely low, meaning that a linear fit is insufficient, but the p-values for the third and fourth term are much larger, meaning that third or higher degree models are not justified. Thus, we select the second degree model. Note that this is only valid because R is fitting orthogonal polynomials (don't try to do this when fitting raw polynomials!). The result would have been the same if we had used ANOVA. As a matter of fact, the squares of the t-statistics here are equal to the F-statistics of the ANOVA test:
linear.model <- lm(yy ~ poly(xx,1))
quadratic.model <- lm(yy ~ poly(xx,2))
cubic.model <- lm(yy ~ poly(xx,3))
anova(linear.model, quadratic.model, cubic.model, quartic.model)
Analysis of Variance Table
Model 1: yy ~ poly(xx, 1)
Model 2: yy ~ poly(xx, 2)
Model 3: yy ~ poly(xx, 3)
Model 4: yy ~ poly(xx, 4)
Res.Df RSS Df Sum of Sq F Pr(>F)
1 98 1.27901
2 97 0.86772 1 0.41129 45.8622 1.049e-09 ***
3 96 0.86570 1 0.00202 0.2248 0.6365
4 95 0.85196 1 0.01374 1.5322 0.2188
For example, 6.772^2 = 45.85998, which is not exactly 45.8622 but pretty close, taking into account numerical errors.
The advantage of the ANOVA test comes into play when you want to explore non-polynomial models, as long as they're all nested. Two or more models $M_1,\dots,M_N$ are nested if the predictors of $M_i$ are a subset of the predictors of $M_{i+1}$, for each $i$. For example, let's consider a cubic spline model with 1 interior knot placed at the median of xx. The cubic spline basis includes linear, second and third degree polynomials, thus the linear.model, the quadratic.model and the cubic.model are all nested models of the following spline.model:
spline.model <- lm(yy ~ bs(xx,knots = quantile(xx,prob=0.5)))
The quartic.model is not a nested model of the spline.model (nor is the vice versa true), so we must leave it out of our ANOVA test:
anova(linear.model, quadratic.model,cubic.model,spline.model)
Analysis of Variance Table
Model 1: yy ~ poly(xx, 1)
Model 2: yy ~ poly(xx, 2)
Model 3: yy ~ poly(xx, 3)
Model 4: yy ~ bs(xx, knots = quantile(xx, prob = 0.5))
Res.Df RSS Df Sum of Sq F Pr(>F)
1 98 1.27901
2 97 0.86772 1 0.41129 46.1651 9.455e-10 ***
3 96 0.86570 1 0.00202 0.2263 0.6354
4 95 0.84637 1 0.01933 2.1699 0.1440
Again, we see that a quadratic fit is justified, but we have no reason to reject the hypothesis of a quadratic model, in favour of a cubic or a spline fit alternative.
Finally, if you would like to test also non-nested model (for example, you would like to test a linear model, a spline model and a nonlinear model such as a Gaussian Process), then I don't think there are hypothesis tests for that. In this case your best bet is cross-validation.
|
Statistical test to determine if a relationship is linear?
I like a lot the ANOVA-based answer by Stephan Kolassa. However, I would also like to offer a slightly different perspective.
First of all, consider the reason why you're testing for nonlinearity. If
|
46,623
|
Statistical test to determine if a relationship is linear?
|
Along the lines of what @Kolassa said, I would suggest fitting a non-parametric model (something like a cubic spline smoother) to your data, and see if the improvement in the fit is significant.
If I remember well, the book "Generalized Additive Models", by Hastie and Tibshirani, contains a description of test that can be used for that.
|
Statistical test to determine if a relationship is linear?
|
Along the lines of what @Kolassa said, I would suggest fitting a non-parametric model (something like a cubic spline smoother) to your data, and see if the improvement in the fit is significant.
If I
|
Statistical test to determine if a relationship is linear?
Along the lines of what @Kolassa said, I would suggest fitting a non-parametric model (something like a cubic spline smoother) to your data, and see if the improvement in the fit is significant.
If I remember well, the book "Generalized Additive Models", by Hastie and Tibshirani, contains a description of test that can be used for that.
|
Statistical test to determine if a relationship is linear?
Along the lines of what @Kolassa said, I would suggest fitting a non-parametric model (something like a cubic spline smoother) to your data, and see if the improvement in the fit is significant.
If I
|
46,624
|
Statistical test to determine if a relationship is linear?
|
This is way more basic than the other suggestions, but you could plot ln(Y) against ln(X). The slope of the best-fit line here would be the order of the relationship between X and Y (for instance if the slope of ln(Y) vs ln(X) is a, then the relationship between X and Y is Y = CX^a + D, where C and D are constants.)
To find if the relationship between X and Y is linear, you could do LINEST (Excel function) on the plot of ln(Y) vs. ln(X). This will give you the slope and the uncertainty in the slope. Then take the Z score between 1 and your slope. If the absolute value of the Z score is less than 2, the data is linear, but if it's more than 2, the data is not.
|
Statistical test to determine if a relationship is linear?
|
This is way more basic than the other suggestions, but you could plot ln(Y) against ln(X). The slope of the best-fit line here would be the order of the relationship between X and Y (for instance if t
|
Statistical test to determine if a relationship is linear?
This is way more basic than the other suggestions, but you could plot ln(Y) against ln(X). The slope of the best-fit line here would be the order of the relationship between X and Y (for instance if the slope of ln(Y) vs ln(X) is a, then the relationship between X and Y is Y = CX^a + D, where C and D are constants.)
To find if the relationship between X and Y is linear, you could do LINEST (Excel function) on the plot of ln(Y) vs. ln(X). This will give you the slope and the uncertainty in the slope. Then take the Z score between 1 and your slope. If the absolute value of the Z score is less than 2, the data is linear, but if it's more than 2, the data is not.
|
Statistical test to determine if a relationship is linear?
This is way more basic than the other suggestions, but you could plot ln(Y) against ln(X). The slope of the best-fit line here would be the order of the relationship between X and Y (for instance if t
|
46,625
|
How can one compute Lilliefors' test for arbitrary distributions?
|
[At present this only deals with the initial question regarding limitations. I may come back to address some of the other questions.]
The Kolmogorov-Smirnov test (i.e. one with a fully specified continuous distribution) is itself distribution-free -- the distribution of the test statistic doesn't depend on what that specified distribution is.
In the case of the Lilliefors test we know the distributional form but we don't know one or more parameters, so the distribution isn't fully specified (we estimate those unknown parameters) and as a result the test isn't distribution free - we need to treat each distribution separately.
The central issue with the standard approach to the Lilliefors' test is that you want the distribution of the test statistic to stay the same across different sets of parameter values.
Given some specified distribution, what we want then is that the test works the same no matter what the true parameter values are.
Consider the cases Lilliefors looked at - a normal and an expoenential. Let's take the exponential first. When we estimate the scale parameter ($\mu$ say), and then divide the observed values by that scale ($V_i=X_i/\hat{\mu}$) to get a standardized set of values, the distribution of those standardized values doesn't depend on the true scale parameter, $\mu$ (it's in both the numerator and denominator and so cancels out).
Similarly, if we estimate both parameters in the normal -- the distribution of the standardized values $Z_i=\frac{X_i-\hat{\mu}}{\hat{\sigma}}$ don't depend on $\mu$ and $\sigma$.
(You may find it useful at this point to read about pivotal quantities and ancillary statistics)
As a result, in cases like these, the distribution of the test statistic doesn't change as we change the parameter values; it only depends on the particular distribution, which parameters are estimated (e.g. it changes again if we only estimate one of the parameters in the normal), and the sample size.
This is not always so. For example if we were looking at a beta distribution, it's not the case that simply putting in the estimated parameter values and using the probability integral transform leaves the distribution of the test statistic unchanged as you change parameter values. I look at a gamma example below.
In some circumstances it may not make a huge difference (you might still have an approximate test), and in some cases it might not work well at small sample size but may be reasonable at large sample sizes. Such things are a matter for investigation -- but unless you have the property discussed above you can't necessarily just assume that things will just work without some reason to believe they will.
This is the reason for my caution in the original thread you refer to.
Example of the issue at hand:
You mentioned the gamma in your question, so here's an small example of the issue with that, looking at a small and a large value of the shape parameter. Note that it's just the shape parameter that's the problem here, since the scale parameter estimate can just be used to scale the data in the same way as the exponential:
As you see the right tail of the two distributions is different. However, for non-small values of the parameter the cube root transformation leaves gammas having almost the same shape (but differing in location and scale, as a function of the parameters). This suggests that you could safely have a "large-shape-parameter" approximate test - it suggests that the distribution should be almost the same for say $\alpha=10$ as $\alpha=100$, for example.
[Further, it looks like the quantiles of the KS statistic at small values of the shape parameter is nearly linear in the quantiles for larger values, so it's possible there may be something approximate that could be done with smaller estimated shape parameters to get a test of about the right size.]
|
How can one compute Lilliefors' test for arbitrary distributions?
|
[At present this only deals with the initial question regarding limitations. I may come back to address some of the other questions.]
The Kolmogorov-Smirnov test (i.e. one with a fully specified conti
|
How can one compute Lilliefors' test for arbitrary distributions?
[At present this only deals with the initial question regarding limitations. I may come back to address some of the other questions.]
The Kolmogorov-Smirnov test (i.e. one with a fully specified continuous distribution) is itself distribution-free -- the distribution of the test statistic doesn't depend on what that specified distribution is.
In the case of the Lilliefors test we know the distributional form but we don't know one or more parameters, so the distribution isn't fully specified (we estimate those unknown parameters) and as a result the test isn't distribution free - we need to treat each distribution separately.
The central issue with the standard approach to the Lilliefors' test is that you want the distribution of the test statistic to stay the same across different sets of parameter values.
Given some specified distribution, what we want then is that the test works the same no matter what the true parameter values are.
Consider the cases Lilliefors looked at - a normal and an expoenential. Let's take the exponential first. When we estimate the scale parameter ($\mu$ say), and then divide the observed values by that scale ($V_i=X_i/\hat{\mu}$) to get a standardized set of values, the distribution of those standardized values doesn't depend on the true scale parameter, $\mu$ (it's in both the numerator and denominator and so cancels out).
Similarly, if we estimate both parameters in the normal -- the distribution of the standardized values $Z_i=\frac{X_i-\hat{\mu}}{\hat{\sigma}}$ don't depend on $\mu$ and $\sigma$.
(You may find it useful at this point to read about pivotal quantities and ancillary statistics)
As a result, in cases like these, the distribution of the test statistic doesn't change as we change the parameter values; it only depends on the particular distribution, which parameters are estimated (e.g. it changes again if we only estimate one of the parameters in the normal), and the sample size.
This is not always so. For example if we were looking at a beta distribution, it's not the case that simply putting in the estimated parameter values and using the probability integral transform leaves the distribution of the test statistic unchanged as you change parameter values. I look at a gamma example below.
In some circumstances it may not make a huge difference (you might still have an approximate test), and in some cases it might not work well at small sample size but may be reasonable at large sample sizes. Such things are a matter for investigation -- but unless you have the property discussed above you can't necessarily just assume that things will just work without some reason to believe they will.
This is the reason for my caution in the original thread you refer to.
Example of the issue at hand:
You mentioned the gamma in your question, so here's an small example of the issue with that, looking at a small and a large value of the shape parameter. Note that it's just the shape parameter that's the problem here, since the scale parameter estimate can just be used to scale the data in the same way as the exponential:
As you see the right tail of the two distributions is different. However, for non-small values of the parameter the cube root transformation leaves gammas having almost the same shape (but differing in location and scale, as a function of the parameters). This suggests that you could safely have a "large-shape-parameter" approximate test - it suggests that the distribution should be almost the same for say $\alpha=10$ as $\alpha=100$, for example.
[Further, it looks like the quantiles of the KS statistic at small values of the shape parameter is nearly linear in the quantiles for larger values, so it's possible there may be something approximate that could be done with smaller estimated shape parameters to get a test of about the right size.]
|
How can one compute Lilliefors' test for arbitrary distributions?
[At present this only deals with the initial question regarding limitations. I may come back to address some of the other questions.]
The Kolmogorov-Smirnov test (i.e. one with a fully specified conti
|
46,626
|
How to derive the MLE of a Gaussian mixture distribution
|
This is the proper start but I wonder at the wording of the exercise. I would have asked the following:
Write the likelihood of the sample $(x_1,\ldots,x_n)$ when the $X_i$'s are iid from$$p(x)= \mathbb{P}(K=1) N(x|\mu_1,\sigma^2_1) +
\mathbb{P}(K=0) N(x|\mu_0,\sigma^2_0)\qquad\qquad(1)$$and conclude at the lack of closed-form expression for the maximum likelihood estimator.
Introducing the latent variables $K_i$ associated with the component of each $X_i$, namely$$\mathbb{P}(K_i=1)=\pi_1=1-\mathbb{P}(K=0)$$and$$X_i|K_i=k\sim
N(x|\mu_k,\sigma^2_k)$$show that the marginal distribution of $X_i$ is
indeed (1).
Give the density of the pair $(X_i,K_i)$ and deduce the density of the completed sample $((x_1,k_x),\ldots,(x_n,k_n))$, acting as if the
$k_i$'s were also observed. We will call this density the completed
likelihood.
Derive the maximum likelihood estimator of the parameter $(\pi_0,\mu_0,\mu_1,\sigma_0,\sigma_1)$ based on the completed sample
$((x_1,k_x),\ldots,(x_n,k_n))$.
|
How to derive the MLE of a Gaussian mixture distribution
|
This is the proper start but I wonder at the wording of the exercise. I would have asked the following:
Write the likelihood of the sample $(x_1,\ldots,x_n)$ when the $X_i$'s are iid from$$p(x)= \ma
|
How to derive the MLE of a Gaussian mixture distribution
This is the proper start but I wonder at the wording of the exercise. I would have asked the following:
Write the likelihood of the sample $(x_1,\ldots,x_n)$ when the $X_i$'s are iid from$$p(x)= \mathbb{P}(K=1) N(x|\mu_1,\sigma^2_1) +
\mathbb{P}(K=0) N(x|\mu_0,\sigma^2_0)\qquad\qquad(1)$$and conclude at the lack of closed-form expression for the maximum likelihood estimator.
Introducing the latent variables $K_i$ associated with the component of each $X_i$, namely$$\mathbb{P}(K_i=1)=\pi_1=1-\mathbb{P}(K=0)$$and$$X_i|K_i=k\sim
N(x|\mu_k,\sigma^2_k)$$show that the marginal distribution of $X_i$ is
indeed (1).
Give the density of the pair $(X_i,K_i)$ and deduce the density of the completed sample $((x_1,k_x),\ldots,(x_n,k_n))$, acting as if the
$k_i$'s were also observed. We will call this density the completed
likelihood.
Derive the maximum likelihood estimator of the parameter $(\pi_0,\mu_0,\mu_1,\sigma_0,\sigma_1)$ based on the completed sample
$((x_1,k_x),\ldots,(x_n,k_n))$.
|
How to derive the MLE of a Gaussian mixture distribution
This is the proper start but I wonder at the wording of the exercise. I would have asked the following:
Write the likelihood of the sample $(x_1,\ldots,x_n)$ when the $X_i$'s are iid from$$p(x)= \ma
|
46,627
|
How to derive the MLE of a Gaussian mixture distribution
|
I continued working on this exercise and came up with a solution. I'd be glad about comments.
Let $\theta=[\pi_0,\pi_1,\mu_0,\mu_1,\sigma_0^2,\sigma_1^2]$
The likelihood over N observations is given by:
$$ P(x|\theta) = \prod_{i=1}^n \bigg[\pi_0 N(x_i|\mu_0,\sigma_0^2)+\pi_1 N(x_i|\mu_1,\sigma_1^2) \bigg]$$
The likelihood written as product over sets $K_0$ and $K_1$ is given by
$$ P(x|\theta) = \prod_{i=1}^n \bigg[ (\pi_0 N(x_i|\mu_0,\sigma_0^2))^{1-k_i}(\pi_1 N(x_i|\mu_1,\sigma_1^2))^{k_i} \bigg]$$
The log-likelihood is given by
$$ \ln P(x|\theta) = \sum_{i=1}^n \bigg[ (1-k_i) (\ln \pi_0 + \ln N(x_i|\mu_0,\sigma_0^2))+k_i(\ln \pi_1 + \ln N(x_i|\mu_1,\sigma_1^2)) \bigg] $$
Consequently we can find the MLE for $\mu_0$ and $\sigma^2_0$ in a nearly standard way finding:
$$\hat{\mu}_0 = \frac{1}{\sum_{i_1}^n (1-k_i)} \sum_{i_1}^n (1-k_i) x_i$$
$$\hat{\sigma}^2_0 = \frac{1}{\sum_{i_1}^n (1-k_i)} \sum_{i_1}^n (1-k_i)(x_i - \hat{\mu}_0)^2$$
|
How to derive the MLE of a Gaussian mixture distribution
|
I continued working on this exercise and came up with a solution. I'd be glad about comments.
Let $\theta=[\pi_0,\pi_1,\mu_0,\mu_1,\sigma_0^2,\sigma_1^2]$
The likelihood over N observations is given
|
How to derive the MLE of a Gaussian mixture distribution
I continued working on this exercise and came up with a solution. I'd be glad about comments.
Let $\theta=[\pi_0,\pi_1,\mu_0,\mu_1,\sigma_0^2,\sigma_1^2]$
The likelihood over N observations is given by:
$$ P(x|\theta) = \prod_{i=1}^n \bigg[\pi_0 N(x_i|\mu_0,\sigma_0^2)+\pi_1 N(x_i|\mu_1,\sigma_1^2) \bigg]$$
The likelihood written as product over sets $K_0$ and $K_1$ is given by
$$ P(x|\theta) = \prod_{i=1}^n \bigg[ (\pi_0 N(x_i|\mu_0,\sigma_0^2))^{1-k_i}(\pi_1 N(x_i|\mu_1,\sigma_1^2))^{k_i} \bigg]$$
The log-likelihood is given by
$$ \ln P(x|\theta) = \sum_{i=1}^n \bigg[ (1-k_i) (\ln \pi_0 + \ln N(x_i|\mu_0,\sigma_0^2))+k_i(\ln \pi_1 + \ln N(x_i|\mu_1,\sigma_1^2)) \bigg] $$
Consequently we can find the MLE for $\mu_0$ and $\sigma^2_0$ in a nearly standard way finding:
$$\hat{\mu}_0 = \frac{1}{\sum_{i_1}^n (1-k_i)} \sum_{i_1}^n (1-k_i) x_i$$
$$\hat{\sigma}^2_0 = \frac{1}{\sum_{i_1}^n (1-k_i)} \sum_{i_1}^n (1-k_i)(x_i - \hat{\mu}_0)^2$$
|
How to derive the MLE of a Gaussian mixture distribution
I continued working on this exercise and came up with a solution. I'd be glad about comments.
Let $\theta=[\pi_0,\pi_1,\mu_0,\mu_1,\sigma_0^2,\sigma_1^2]$
The likelihood over N observations is given
|
46,628
|
How to detect nonlinear relationship?
|
Furthermore, both Pearson correlation coefficient and Spearman's rank
correlation coefficient were calculated and they were 0.624 and 0.619
respectively. Does this indicate a linear relationship?
No, not necessarily. You can build datasets which have 0.6, but dependence is strongly non-linear, or nearly linear/comonotonic but with anti-tail-dependence (high extreme values for ones correspond to low extreme values for the other, and conversely).
You can display an empirical copula: You sort the values for X (and divide by the number of values), you sort the values for Y (and divide by the number of values).
You can then plot a 'normalized' scatterplot or an estimated density of this bivariate distribution of uniform marginals.
The perfect positive dependence (comonotonic relationship) is depicted by the diagonal of $[0,1]^2$. For some python code and empirical copulas illustration, you can have a look there.
|
How to detect nonlinear relationship?
|
Furthermore, both Pearson correlation coefficient and Spearman's rank
correlation coefficient were calculated and they were 0.624 and 0.619
respectively. Does this indicate a linear relationship?
|
How to detect nonlinear relationship?
Furthermore, both Pearson correlation coefficient and Spearman's rank
correlation coefficient were calculated and they were 0.624 and 0.619
respectively. Does this indicate a linear relationship?
No, not necessarily. You can build datasets which have 0.6, but dependence is strongly non-linear, or nearly linear/comonotonic but with anti-tail-dependence (high extreme values for ones correspond to low extreme values for the other, and conversely).
You can display an empirical copula: You sort the values for X (and divide by the number of values), you sort the values for Y (and divide by the number of values).
You can then plot a 'normalized' scatterplot or an estimated density of this bivariate distribution of uniform marginals.
The perfect positive dependence (comonotonic relationship) is depicted by the diagonal of $[0,1]^2$. For some python code and empirical copulas illustration, you can have a look there.
|
How to detect nonlinear relationship?
Furthermore, both Pearson correlation coefficient and Spearman's rank
correlation coefficient were calculated and they were 0.624 and 0.619
respectively. Does this indicate a linear relationship?
|
46,629
|
How to interpret the bandwidth value in a kernel density estimation?
|
For simplicity, let's assume that we are talking about some really simple kernel, say triangular kernel:
$$ K(x) =
\begin{cases}
1 - |x| & \text{if } x \in [-1, 1] \\
0 & \text{otherwise}
\end{cases}
$$
Recall that in kernel density estimation for estimating density $\hat f_h$ we combine $n$ kernels parametrized by $h$ centered at points $x_i$:
$$
\hat{f}_h(x) = \frac{1}{n}\sum_{i=1}^n K_h (x - x_i) = \frac{1}{nh} \sum_{i=1}^n K\Big(\frac{x-x_i}{h}\Big)
$$
Notice that by $\frac{x-x_i}{h}$ we mean that we want to re-scale the difference of some $x$ with point $x_i$ by factor $h$. Most of the kernels (excluding Gaussian) are limited to the $(-1, 1)$ range, so this means that they will return densities equal to zero for points out of $(x_i-h, x_i+h)$ range. Saying it differently, $h$ is scale parameter for kernel, that changes it's range from $(-1, 1)$ to $(-h, h)$.
This is illustrated on the plot below, where $n=7$ points are used for estimating kernel densities with different bandwidthes $h$ (colored points on top mark the individual values, colored lines are the kernels, gray line is overall kernel estimate). As you can see, $h < 1$ makes the kernels narrower, while $h > 1$ makes them wider. Changing $h$ influences both the individual kernels and the final kernel density estimate, since it's a mixture distribution of individual kernels. Higher $h$ makes the kernel density estimate smoother, while as $h$ gets smaller it leads to kernels being closer to individual datapoints, and with $h \rightarrow 0$ you would end up with just a bunch of Direc delta functions centered at $x_i$ points.
And the R code that produced the plots:
set.seed(123)
n <- 7
x <- rnorm(n, sd = 3)
K <- function(x) ifelse(x >= -1 & x <= 1, 1 - abs(x), 0)
kde <- function(x, data, h, K) {
n <- length(data)
out <- outer(x, data, function(xi,yi) K((xi-yi)/h))
rowSums(out)/(n*h)
}
xx = seq(-8, 8, by = 0.001)
for (h in c(0.5, 1, 1.5, 2)) {
plot(NA, xlim = c(-4, 8), ylim = c(0, 0.5), xlab = "", ylab = "",
main = paste0("h = ", h))
for (i in 1:n) {
lines(xx, K((xx-x[i])/h)/(n*h), type = "l", col = rainbow(n)[i])
rug(x[i], lwd = 2, col = rainbow(n)[i], side = 3, ticksize = 0.075)
}
lines(xx, kde(xx, x, h, K), col = "darkgray")
}
For more details you can check the great introductory books by Silverman (1986) and Wand & Jones (1995).
Silverman, B.W. (1986). Density estimation for statistics and data analysis. CRC/Chapman & Hall.
Wand, M.P and Jones, M.C. (1995). Kernel Smoothing. London: Chapman & Hall/CRC.
|
How to interpret the bandwidth value in a kernel density estimation?
|
For simplicity, let's assume that we are talking about some really simple kernel, say triangular kernel:
$$ K(x) =
\begin{cases}
1 - |x| & \text{if } x \in [-1, 1] \\
0 & \text{otherwise}
\end{cases}
|
How to interpret the bandwidth value in a kernel density estimation?
For simplicity, let's assume that we are talking about some really simple kernel, say triangular kernel:
$$ K(x) =
\begin{cases}
1 - |x| & \text{if } x \in [-1, 1] \\
0 & \text{otherwise}
\end{cases}
$$
Recall that in kernel density estimation for estimating density $\hat f_h$ we combine $n$ kernels parametrized by $h$ centered at points $x_i$:
$$
\hat{f}_h(x) = \frac{1}{n}\sum_{i=1}^n K_h (x - x_i) = \frac{1}{nh} \sum_{i=1}^n K\Big(\frac{x-x_i}{h}\Big)
$$
Notice that by $\frac{x-x_i}{h}$ we mean that we want to re-scale the difference of some $x$ with point $x_i$ by factor $h$. Most of the kernels (excluding Gaussian) are limited to the $(-1, 1)$ range, so this means that they will return densities equal to zero for points out of $(x_i-h, x_i+h)$ range. Saying it differently, $h$ is scale parameter for kernel, that changes it's range from $(-1, 1)$ to $(-h, h)$.
This is illustrated on the plot below, where $n=7$ points are used for estimating kernel densities with different bandwidthes $h$ (colored points on top mark the individual values, colored lines are the kernels, gray line is overall kernel estimate). As you can see, $h < 1$ makes the kernels narrower, while $h > 1$ makes them wider. Changing $h$ influences both the individual kernels and the final kernel density estimate, since it's a mixture distribution of individual kernels. Higher $h$ makes the kernel density estimate smoother, while as $h$ gets smaller it leads to kernels being closer to individual datapoints, and with $h \rightarrow 0$ you would end up with just a bunch of Direc delta functions centered at $x_i$ points.
And the R code that produced the plots:
set.seed(123)
n <- 7
x <- rnorm(n, sd = 3)
K <- function(x) ifelse(x >= -1 & x <= 1, 1 - abs(x), 0)
kde <- function(x, data, h, K) {
n <- length(data)
out <- outer(x, data, function(xi,yi) K((xi-yi)/h))
rowSums(out)/(n*h)
}
xx = seq(-8, 8, by = 0.001)
for (h in c(0.5, 1, 1.5, 2)) {
plot(NA, xlim = c(-4, 8), ylim = c(0, 0.5), xlab = "", ylab = "",
main = paste0("h = ", h))
for (i in 1:n) {
lines(xx, K((xx-x[i])/h)/(n*h), type = "l", col = rainbow(n)[i])
rug(x[i], lwd = 2, col = rainbow(n)[i], side = 3, ticksize = 0.075)
}
lines(xx, kde(xx, x, h, K), col = "darkgray")
}
For more details you can check the great introductory books by Silverman (1986) and Wand & Jones (1995).
Silverman, B.W. (1986). Density estimation for statistics and data analysis. CRC/Chapman & Hall.
Wand, M.P and Jones, M.C. (1995). Kernel Smoothing. London: Chapman & Hall/CRC.
|
How to interpret the bandwidth value in a kernel density estimation?
For simplicity, let's assume that we are talking about some really simple kernel, say triangular kernel:
$$ K(x) =
\begin{cases}
1 - |x| & \text{if } x \in [-1, 1] \\
0 & \text{otherwise}
\end{cases}
|
46,630
|
Can I use a paired t-test on data that are averages?
|
It's probably not a problem at all, provided the sample sizes are similar to each other.
There could be complications with small sample sizes, though. Intuitively, an average of a small sample is more variable than averages of large samples. If some pairs are both based on small samples, they could create unusual outlying values. It's well known that the Student $t$ test does not work well in such cases.
Analysis
Let's pursue this a little further with a model and a simulation. Because this is meant only to illustrate a phenomenon, I'll propose only the simplest possible model--one for which the $t$ test is ordinarily without any problems at all--and use only the simplest form of the $t$ test to avoid technical complications.
The model is that for each pair of averages, $(\bar x_A, \bar x_B)$, the data contributing to the first average are a sample of some Normal distribution with mean $\mu_A$ and variance $\sigma^2$; and the data contributing to the second average are a sample of some Normal distribution with mean $\mu_B$ and variance $\sigma^2$. The null hypothesis asserts that in every pair $\mu_A=\mu_B$. The alternative hypothesis is that there is some systematic difference $\delta$ and that in each pair $\mu_A = \mu_B + \delta$.
If all the sample sizes were the same, say equal to $m$, then every one of the observations would behave as independent Normal random variables with variance $\sigma^2/m$. This is where a paired $t$ test is ideal: the differences $\bar x_A - \bar x_B$ then have Normal distributions with mean $0$ and variance $2\sigma^2/m$ and they are all independent. The Student $t$ statistic--equal to the difference in means of the $\bar x_A$ and the means of the $\bar x_B$, divided by the estimated standard error of those differences, then will have exactly a $t_{n-1}$ distribution, where $n$ is the number of pairs.
However, when the sample sizes vary, the resulting distribution is not exactly a Student $t$ distribution. The numerator of the $t$ statistic, being a linear combination of independent Normal variables, is still Normal; but the denominator, being the square root of a sum of squares of Normal variables having different variances, no longer has a $\chi^2$ distribution. We therefore have no right to expect the ratio to have a Student $t$ distribution.
Simulation
To see whether this might be a practical issue, I simulated $10,000$ paired t-tests for $20$ pairs. First I computed the values in each pair as the average of two independent values (equal to $2$). All data were independently drawn from the same Normal distribution. (You may easily vary the code to simulate your particular circumstances.) I collected the t-statistic for each iteration. Here they are, displayed as a histogram. On it is drawn (as a red curve) the Student $t$ distribution with $20-1=19$ degrees of freedom: it is supposed to describe this histogram well, especially in the tails, which correspond to significant results.
It's a nice agreement between theory and simulation, confirming the appropriateness of the t-test (and, incidentally, showing the code is likely working as intended).
To create an extreme case (but not the most extreme), I supposed that one pair was based on samples of size $2$ while the others were based on samples of size $200$, but otherwise all data were independently drawn from the same Normal distribution.
Something went very wrong. The single pair based on a small sample has caused the $t$ statistics to be less extreme than we might otherwise suppose, but rarely near zero. This is due, as previously suggested, to its effect on the estimated standard deviation: the inflated SD pulls in the tails--it's hard to get a large fraction when its denominator is large--but also the deviation of the single pair dominates the numerator. Accordingly, numerator and denominator tend to be comparable (but can have different signs). That's why the histogram bunches up near $\pm 1$.
As a result, the t-test will have a harder time detecting a departure from the null hypothesis: it will be less powerful than we think.
Conclusions
In practice, we can expect there to be a certain amount of this behavior in your data. A deeper analysis of variance estimates and $\chi^2$ distributions indicates it really won't be much of a problem unless there are indeed some pairs with radically smaller sample sizes than others.
Although I haven't fully analyzed any alternatives (the question did not ask for a solution, only for whether a paired t-test would work!), I believe that this analysis could readily be extended to study an obvious weighted version of the t-test, weighting the data by the reciprocals of their sample sizes, and that the weighted version would have superior performance. It could take some effort to figure out the appropriate degrees of freedom to use in general.
Software
This is the R code that created the figures. By suitably changing n.group you can specify the sample sizes in your pairs and re-run it to study the extent to which using a paired t-test might be problematic.
n.group <- cbind(A=c(2, rep(200, 19)), B=c(2, rep(200, 19)))
#n.group <- cbind(A=rep(2,20), B=rep(2,20))
n <- nrow(n.group) # Number of pairs
mu <- c(A=0, B=0) # The underlying group means
n.sim <- 1e4 # Simulation size
# Create the data.
x <- array(rnorm(n.sim*length(n.group), rep(mu, each=n), 1/sqrt(n.group)),
dim=c(n, 2, n.sim))
# Run the t-tests.
t.stat <- apply(x, 3, function(y) {
z <- y[,1]-y[,2]
mean(z) / sd(z) * sqrt(length(z))
})
# Display the results and compare to the Student t distribution.
hist(t.stat, freq=FALSE, breaks=50)
curve(dt(x, n-1), add=TRUE, col="Red", lwd=2)
|
Can I use a paired t-test on data that are averages?
|
It's probably not a problem at all, provided the sample sizes are similar to each other.
There could be complications with small sample sizes, though. Intuitively, an average of a small sample is mor
|
Can I use a paired t-test on data that are averages?
It's probably not a problem at all, provided the sample sizes are similar to each other.
There could be complications with small sample sizes, though. Intuitively, an average of a small sample is more variable than averages of large samples. If some pairs are both based on small samples, they could create unusual outlying values. It's well known that the Student $t$ test does not work well in such cases.
Analysis
Let's pursue this a little further with a model and a simulation. Because this is meant only to illustrate a phenomenon, I'll propose only the simplest possible model--one for which the $t$ test is ordinarily without any problems at all--and use only the simplest form of the $t$ test to avoid technical complications.
The model is that for each pair of averages, $(\bar x_A, \bar x_B)$, the data contributing to the first average are a sample of some Normal distribution with mean $\mu_A$ and variance $\sigma^2$; and the data contributing to the second average are a sample of some Normal distribution with mean $\mu_B$ and variance $\sigma^2$. The null hypothesis asserts that in every pair $\mu_A=\mu_B$. The alternative hypothesis is that there is some systematic difference $\delta$ and that in each pair $\mu_A = \mu_B + \delta$.
If all the sample sizes were the same, say equal to $m$, then every one of the observations would behave as independent Normal random variables with variance $\sigma^2/m$. This is where a paired $t$ test is ideal: the differences $\bar x_A - \bar x_B$ then have Normal distributions with mean $0$ and variance $2\sigma^2/m$ and they are all independent. The Student $t$ statistic--equal to the difference in means of the $\bar x_A$ and the means of the $\bar x_B$, divided by the estimated standard error of those differences, then will have exactly a $t_{n-1}$ distribution, where $n$ is the number of pairs.
However, when the sample sizes vary, the resulting distribution is not exactly a Student $t$ distribution. The numerator of the $t$ statistic, being a linear combination of independent Normal variables, is still Normal; but the denominator, being the square root of a sum of squares of Normal variables having different variances, no longer has a $\chi^2$ distribution. We therefore have no right to expect the ratio to have a Student $t$ distribution.
Simulation
To see whether this might be a practical issue, I simulated $10,000$ paired t-tests for $20$ pairs. First I computed the values in each pair as the average of two independent values (equal to $2$). All data were independently drawn from the same Normal distribution. (You may easily vary the code to simulate your particular circumstances.) I collected the t-statistic for each iteration. Here they are, displayed as a histogram. On it is drawn (as a red curve) the Student $t$ distribution with $20-1=19$ degrees of freedom: it is supposed to describe this histogram well, especially in the tails, which correspond to significant results.
It's a nice agreement between theory and simulation, confirming the appropriateness of the t-test (and, incidentally, showing the code is likely working as intended).
To create an extreme case (but not the most extreme), I supposed that one pair was based on samples of size $2$ while the others were based on samples of size $200$, but otherwise all data were independently drawn from the same Normal distribution.
Something went very wrong. The single pair based on a small sample has caused the $t$ statistics to be less extreme than we might otherwise suppose, but rarely near zero. This is due, as previously suggested, to its effect on the estimated standard deviation: the inflated SD pulls in the tails--it's hard to get a large fraction when its denominator is large--but also the deviation of the single pair dominates the numerator. Accordingly, numerator and denominator tend to be comparable (but can have different signs). That's why the histogram bunches up near $\pm 1$.
As a result, the t-test will have a harder time detecting a departure from the null hypothesis: it will be less powerful than we think.
Conclusions
In practice, we can expect there to be a certain amount of this behavior in your data. A deeper analysis of variance estimates and $\chi^2$ distributions indicates it really won't be much of a problem unless there are indeed some pairs with radically smaller sample sizes than others.
Although I haven't fully analyzed any alternatives (the question did not ask for a solution, only for whether a paired t-test would work!), I believe that this analysis could readily be extended to study an obvious weighted version of the t-test, weighting the data by the reciprocals of their sample sizes, and that the weighted version would have superior performance. It could take some effort to figure out the appropriate degrees of freedom to use in general.
Software
This is the R code that created the figures. By suitably changing n.group you can specify the sample sizes in your pairs and re-run it to study the extent to which using a paired t-test might be problematic.
n.group <- cbind(A=c(2, rep(200, 19)), B=c(2, rep(200, 19)))
#n.group <- cbind(A=rep(2,20), B=rep(2,20))
n <- nrow(n.group) # Number of pairs
mu <- c(A=0, B=0) # The underlying group means
n.sim <- 1e4 # Simulation size
# Create the data.
x <- array(rnorm(n.sim*length(n.group), rep(mu, each=n), 1/sqrt(n.group)),
dim=c(n, 2, n.sim))
# Run the t-tests.
t.stat <- apply(x, 3, function(y) {
z <- y[,1]-y[,2]
mean(z) / sd(z) * sqrt(length(z))
})
# Display the results and compare to the Student t distribution.
hist(t.stat, freq=FALSE, breaks=50)
curve(dt(x, n-1), add=TRUE, col="Red", lwd=2)
|
Can I use a paired t-test on data that are averages?
It's probably not a problem at all, provided the sample sizes are similar to each other.
There could be complications with small sample sizes, though. Intuitively, an average of a small sample is mor
|
46,631
|
Can I use a paired t-test on data that are averages?
|
In principle it's fine, but you are not using the full power of your data though, since you use only the 15 averages and not the rest of the data. You are testing the null hypothesis of whether $<A_i - B_i>_i = 0$ (with $i$ from 1 to 15), where $A_i$ and $B_i$ are random variables. If you compute not only their means but also their variances, you can compute the total variance using the fact that the variance of the mean is the mean of the variance (and not only the sample variance of your means), the t statistic and the p value yourself.
EDIT (after discussion with @whuber): This approach holds as long as the sample sizes do not get widely different, so that the data can be assumed to be identically distributed. This article proposes a paired T test for non iid data.
|
Can I use a paired t-test on data that are averages?
|
In principle it's fine, but you are not using the full power of your data though, since you use only the 15 averages and not the rest of the data. You are testing the null hypothesis of whether $<A_i
|
Can I use a paired t-test on data that are averages?
In principle it's fine, but you are not using the full power of your data though, since you use only the 15 averages and not the rest of the data. You are testing the null hypothesis of whether $<A_i - B_i>_i = 0$ (with $i$ from 1 to 15), where $A_i$ and $B_i$ are random variables. If you compute not only their means but also their variances, you can compute the total variance using the fact that the variance of the mean is the mean of the variance (and not only the sample variance of your means), the t statistic and the p value yourself.
EDIT (after discussion with @whuber): This approach holds as long as the sample sizes do not get widely different, so that the data can be assumed to be identically distributed. This article proposes a paired T test for non iid data.
|
Can I use a paired t-test on data that are averages?
In principle it's fine, but you are not using the full power of your data though, since you use only the 15 averages and not the rest of the data. You are testing the null hypothesis of whether $<A_i
|
46,632
|
Should I get 100% classification accuracy on training data?
|
No, your data may not be perfectly classifiable especially by a linear classifier and this is not always because of the classifier or the features you are using. None of the features may contain sufficent differences to provide a clear line.
You may try non-linear models which can provide better classification as well as higher risk of over-fitting. Using a validation set can help you identify whether you need a different model or the problem lies in the nature of your data.
|
Should I get 100% classification accuracy on training data?
|
No, your data may not be perfectly classifiable especially by a linear classifier and this is not always because of the classifier or the features you are using. None of the features may contain suffi
|
Should I get 100% classification accuracy on training data?
No, your data may not be perfectly classifiable especially by a linear classifier and this is not always because of the classifier or the features you are using. None of the features may contain sufficent differences to provide a clear line.
You may try non-linear models which can provide better classification as well as higher risk of over-fitting. Using a validation set can help you identify whether you need a different model or the problem lies in the nature of your data.
|
Should I get 100% classification accuracy on training data?
No, your data may not be perfectly classifiable especially by a linear classifier and this is not always because of the classifier or the features you are using. None of the features may contain suffi
|
46,633
|
Should I get 100% classification accuracy on training data?
|
No, it's not always possible to create a linear boundary in the predictor space between all "1"s and "0"s in the data set (which is what would be required to have perfect linear classifier).
E.g., what if you had a single predictor and the training data were $y = (0,0,1,1)$, $x = (1,3,2,4)$. You can imagine similar scenarios with more predictors.
|
Should I get 100% classification accuracy on training data?
|
No, it's not always possible to create a linear boundary in the predictor space between all "1"s and "0"s in the data set (which is what would be required to have perfect linear classifier).
E.g., wha
|
Should I get 100% classification accuracy on training data?
No, it's not always possible to create a linear boundary in the predictor space between all "1"s and "0"s in the data set (which is what would be required to have perfect linear classifier).
E.g., what if you had a single predictor and the training data were $y = (0,0,1,1)$, $x = (1,3,2,4)$. You can imagine similar scenarios with more predictors.
|
Should I get 100% classification accuracy on training data?
No, it's not always possible to create a linear boundary in the predictor space between all "1"s and "0"s in the data set (which is what would be required to have perfect linear classifier).
E.g., wha
|
46,634
|
Should I get 100% classification accuracy on training data?
|
No, every dataset is not linearly separable, as previous answers stated it.
Unless... You have more predictors than observations (or more columns than rows).
Therefore you should make sure that the feature extraction pipeline you use does not produce more features than your number of observations.
|
Should I get 100% classification accuracy on training data?
|
No, every dataset is not linearly separable, as previous answers stated it.
Unless... You have more predictors than observations (or more columns than rows).
Therefore you should make sure that the f
|
Should I get 100% classification accuracy on training data?
No, every dataset is not linearly separable, as previous answers stated it.
Unless... You have more predictors than observations (or more columns than rows).
Therefore you should make sure that the feature extraction pipeline you use does not produce more features than your number of observations.
|
Should I get 100% classification accuracy on training data?
No, every dataset is not linearly separable, as previous answers stated it.
Unless... You have more predictors than observations (or more columns than rows).
Therefore you should make sure that the f
|
46,635
|
Should I get 100% classification accuracy on training data?
|
Imagine that it's truly a random data set. Let's say you're trying to classify the data into sick and healthy, and it just happens so that the incidence of sickness is truly random, at least independent of any of your predictors. In this case you shouldn't be getting good accuracy metrics without overfitting
|
Should I get 100% classification accuracy on training data?
|
Imagine that it's truly a random data set. Let's say you're trying to classify the data into sick and healthy, and it just happens so that the incidence of sickness is truly random, at least independe
|
Should I get 100% classification accuracy on training data?
Imagine that it's truly a random data set. Let's say you're trying to classify the data into sick and healthy, and it just happens so that the incidence of sickness is truly random, at least independent of any of your predictors. In this case you shouldn't be getting good accuracy metrics without overfitting
|
Should I get 100% classification accuracy on training data?
Imagine that it's truly a random data set. Let's say you're trying to classify the data into sick and healthy, and it just happens so that the incidence of sickness is truly random, at least independe
|
46,636
|
Projecting data on a sphere
|
It might be possible to solve your example problem using a procedure similar to nonclassical metric MDS (using the stress criterion). Initialize the 'projected points' to lie on a sphere (more on this later). Then, use an optimization solver to find the projected points that minimize the objective function. There are a few differences compared to ordinary MDS. 1) Geodesic distances must be computed between projected points, rather than Euclidean distances. This is easy when the manifold is a sphere. 2) The optimization must obey the constraint that the projected points lie on a sphere.
Fortunately, there are existing, open source solvers for performing optimization where the parameters are constrained to lie on a particular manifold. Since the parameters here are the projected points themselves, this is exactly what's needed. Manopt is a package for Matlab, and Pymanopt is for Python. They include spheres as one of the supported manifolds, and others are available.
The quality of the final result will depend on the initialization. This is also the case for ordinary, nonclassical MDS, where a good initial configuration is often obtained using classical MDS (which can be solved efficiently as an eigenvalue problem). For 'spherical MDS', you could take the following approach for initialization. Perform ordinary MDS, isomap, or some other nonlinear dimensionality reduction technique to obtain coordinates in a Euclidean space. Then, map the resulting points onto the surface of a sphere using a suitable projection. For example, to project onto a 3-sphere, first perform ordinary dimensionality reduction to 2d. Map the resulting points onto a 3-sphere using something like a stereographic projection. If the original data lies on some manifold that's topologically equivalent to a sphere, then it might be more appropriate to perform initial dimensionality reduction to 3d (or do nothing if they're already in 3d), then normalize the vectors to pull them onto a sphere. Finally, run the optimization. As with ordinary, nonclassical MDS, multiple runs can be performed using different initial conditions, then the best result selected.
It should be possible to generalize to other manifolds, and to other objective functions. For example, we could imagine converting the objective functions of other nonlinear dimensionality reduction algorithms to work on spheres or other manifolds.
|
Projecting data on a sphere
|
It might be possible to solve your example problem using a procedure similar to nonclassical metric MDS (using the stress criterion). Initialize the 'projected points' to lie on a sphere (more on this
|
Projecting data on a sphere
It might be possible to solve your example problem using a procedure similar to nonclassical metric MDS (using the stress criterion). Initialize the 'projected points' to lie on a sphere (more on this later). Then, use an optimization solver to find the projected points that minimize the objective function. There are a few differences compared to ordinary MDS. 1) Geodesic distances must be computed between projected points, rather than Euclidean distances. This is easy when the manifold is a sphere. 2) The optimization must obey the constraint that the projected points lie on a sphere.
Fortunately, there are existing, open source solvers for performing optimization where the parameters are constrained to lie on a particular manifold. Since the parameters here are the projected points themselves, this is exactly what's needed. Manopt is a package for Matlab, and Pymanopt is for Python. They include spheres as one of the supported manifolds, and others are available.
The quality of the final result will depend on the initialization. This is also the case for ordinary, nonclassical MDS, where a good initial configuration is often obtained using classical MDS (which can be solved efficiently as an eigenvalue problem). For 'spherical MDS', you could take the following approach for initialization. Perform ordinary MDS, isomap, or some other nonlinear dimensionality reduction technique to obtain coordinates in a Euclidean space. Then, map the resulting points onto the surface of a sphere using a suitable projection. For example, to project onto a 3-sphere, first perform ordinary dimensionality reduction to 2d. Map the resulting points onto a 3-sphere using something like a stereographic projection. If the original data lies on some manifold that's topologically equivalent to a sphere, then it might be more appropriate to perform initial dimensionality reduction to 3d (or do nothing if they're already in 3d), then normalize the vectors to pull them onto a sphere. Finally, run the optimization. As with ordinary, nonclassical MDS, multiple runs can be performed using different initial conditions, then the best result selected.
It should be possible to generalize to other manifolds, and to other objective functions. For example, we could imagine converting the objective functions of other nonlinear dimensionality reduction algorithms to work on spheres or other manifolds.
|
Projecting data on a sphere
It might be possible to solve your example problem using a procedure similar to nonclassical metric MDS (using the stress criterion). Initialize the 'projected points' to lie on a sphere (more on this
|
46,637
|
Projecting data on a sphere
|
Maybe something like that on R ?
http://planspace.org/2013/02/03/pca-3d-visualization-and-clustering-in-r/
The PCA project on a plane if you only take the first two axes. If you take the 3rd one you have the projection on sphere.
|
Projecting data on a sphere
|
Maybe something like that on R ?
http://planspace.org/2013/02/03/pca-3d-visualization-and-clustering-in-r/
The PCA project on a plane if you only take the first two axes. If you take the 3rd one you h
|
Projecting data on a sphere
Maybe something like that on R ?
http://planspace.org/2013/02/03/pca-3d-visualization-and-clustering-in-r/
The PCA project on a plane if you only take the first two axes. If you take the 3rd one you have the projection on sphere.
|
Projecting data on a sphere
Maybe something like that on R ?
http://planspace.org/2013/02/03/pca-3d-visualization-and-clustering-in-r/
The PCA project on a plane if you only take the first two axes. If you take the 3rd one you h
|
46,638
|
Projecting data on a sphere
|
Relational Perspective Map (http://www.visumap.net/index.aspx?p=Resources/RpmOverview) might be something interesting for you. RPM has been originally purposed for torus surface; then extended to other low dimensional manifolds like 3D sphere and projective plane. A key to design MDS on closed manifold is that the "distance" metric has to be extended to a kind of multi-path-aggregation. The VisuMap software package also provides animation/nevigation to visualize data in spaces like 3D sphere.
|
Projecting data on a sphere
|
Relational Perspective Map (http://www.visumap.net/index.aspx?p=Resources/RpmOverview) might be something interesting for you. RPM has been originally purposed for torus surface; then extended to othe
|
Projecting data on a sphere
Relational Perspective Map (http://www.visumap.net/index.aspx?p=Resources/RpmOverview) might be something interesting for you. RPM has been originally purposed for torus surface; then extended to other low dimensional manifolds like 3D sphere and projective plane. A key to design MDS on closed manifold is that the "distance" metric has to be extended to a kind of multi-path-aggregation. The VisuMap software package also provides animation/nevigation to visualize data in spaces like 3D sphere.
|
Projecting data on a sphere
Relational Perspective Map (http://www.visumap.net/index.aspx?p=Resources/RpmOverview) might be something interesting for you. RPM has been originally purposed for torus surface; then extended to othe
|
46,639
|
Why do we use the Unregularized Cost to plot a Learning Curve?
|
Background: I believe you are referring to this lecture dealing with Regularization and Bias/Variance in the context of polynomial regression.
The algorithm fmincg produces optimized estimated $\hat \theta$ coefficients (or parameters), based on a gradient descent computation derived from the objective function:
$$J(\theta)=\frac{1}{2m}\left(\displaystyle\sum_{i=1}^m(h_{\theta}(x^{(i)})-y^{(i)})^2\right)+\frac{\lambda}{2m}\left(\sum_{j=i}^n\theta_j^2\right)$$
where $m$ is the number of examples (or subjects/observations), each denoted as $x^{(i)}$; $j$ the number of features; and $\lambda$ the regularization parameter. The optimization gradients include the regularization $\lambda$ for each parameter other than $\theta_0$: it is found in the expression: $\frac{\lambda}{m}\theta_j$ after differentiating the equation above.
The issue at hand is to select the optimal $\lambda$ value to prevent overfitting the data, but also avoiding high bias.
To this end, a vector of possible lambda values is supplied, which in the course exercise is $[0,0.001,0.003,0.01,0.03,0.1,0.3,1,3,10]$, to optimize the coefficients $\Theta$. In this process, and for each iteration through the different lambda values, all other factors (basically the model matrix) remain constant.
Consequently, the differences between the $\Theta$ vectors of parameters that will be obtained are a direct consequence of the different regularization parameters $\lambda$ chosen.
At each iteration and using gradient descent the parameters that minimize the objective function are calculated on the entire training set to eventually plot a validation curve of squared errors over lambda values. This is different than in the case of the learning curves (cost vs. number of examples), where the training set is segmented in increasing numbers of observations as explained right here.
At this point, we have obtained optimal estimated parameters on the training set, and their differences are directly related to the regularization parameter.
Therefore, it makes sense to now set aside the regularization and see what would be the cost or errors, applying each different set of $\Theta$'s to both the training and cross validation sets, looking for a minimum in the crossvalidation set errors. We are not looking to optimize further the parameters $\theta$, we are just checking how the choice of different $\lambda$ values (with its associated coefficients) is reflected in the loss (or cost) function, initially dropping the errors, but eventually, and after having taken care of overfitting, progressively increasing these errors due to bias:
This explains why the training error (cost or loss function) is defined as:
$$J_{train}=\frac{1}{2m}\left[\displaystyle\sum_{i=1}^m(h_{\theta}(x^{(i)})-y^{(i)})^2\right]$$
and accordingly, the CV error as:
$$J_{cv}=\frac{1}{2m}\left[\displaystyle\sum_{i=1}^m(h_{\theta}(x^{(i)}_{cv})-y^{(i)}_{cv})^2\right]$$
Basically, the squared errors. In a way the confusion stems from the similarity between the function to minimize by choosing optimal parameters (objective function), and the cost or loss function, meant to assess the errors.
|
Why do we use the Unregularized Cost to plot a Learning Curve?
|
Background: I believe you are referring to this lecture dealing with Regularization and Bias/Variance in the context of polynomial regression.
The algorithm fmincg produces optimized estimated $\hat \
|
Why do we use the Unregularized Cost to plot a Learning Curve?
Background: I believe you are referring to this lecture dealing with Regularization and Bias/Variance in the context of polynomial regression.
The algorithm fmincg produces optimized estimated $\hat \theta$ coefficients (or parameters), based on a gradient descent computation derived from the objective function:
$$J(\theta)=\frac{1}{2m}\left(\displaystyle\sum_{i=1}^m(h_{\theta}(x^{(i)})-y^{(i)})^2\right)+\frac{\lambda}{2m}\left(\sum_{j=i}^n\theta_j^2\right)$$
where $m$ is the number of examples (or subjects/observations), each denoted as $x^{(i)}$; $j$ the number of features; and $\lambda$ the regularization parameter. The optimization gradients include the regularization $\lambda$ for each parameter other than $\theta_0$: it is found in the expression: $\frac{\lambda}{m}\theta_j$ after differentiating the equation above.
The issue at hand is to select the optimal $\lambda$ value to prevent overfitting the data, but also avoiding high bias.
To this end, a vector of possible lambda values is supplied, which in the course exercise is $[0,0.001,0.003,0.01,0.03,0.1,0.3,1,3,10]$, to optimize the coefficients $\Theta$. In this process, and for each iteration through the different lambda values, all other factors (basically the model matrix) remain constant.
Consequently, the differences between the $\Theta$ vectors of parameters that will be obtained are a direct consequence of the different regularization parameters $\lambda$ chosen.
At each iteration and using gradient descent the parameters that minimize the objective function are calculated on the entire training set to eventually plot a validation curve of squared errors over lambda values. This is different than in the case of the learning curves (cost vs. number of examples), where the training set is segmented in increasing numbers of observations as explained right here.
At this point, we have obtained optimal estimated parameters on the training set, and their differences are directly related to the regularization parameter.
Therefore, it makes sense to now set aside the regularization and see what would be the cost or errors, applying each different set of $\Theta$'s to both the training and cross validation sets, looking for a minimum in the crossvalidation set errors. We are not looking to optimize further the parameters $\theta$, we are just checking how the choice of different $\lambda$ values (with its associated coefficients) is reflected in the loss (or cost) function, initially dropping the errors, but eventually, and after having taken care of overfitting, progressively increasing these errors due to bias:
This explains why the training error (cost or loss function) is defined as:
$$J_{train}=\frac{1}{2m}\left[\displaystyle\sum_{i=1}^m(h_{\theta}(x^{(i)})-y^{(i)})^2\right]$$
and accordingly, the CV error as:
$$J_{cv}=\frac{1}{2m}\left[\displaystyle\sum_{i=1}^m(h_{\theta}(x^{(i)}_{cv})-y^{(i)}_{cv})^2\right]$$
Basically, the squared errors. In a way the confusion stems from the similarity between the function to minimize by choosing optimal parameters (objective function), and the cost or loss function, meant to assess the errors.
|
Why do we use the Unregularized Cost to plot a Learning Curve?
Background: I believe you are referring to this lecture dealing with Regularization and Bias/Variance in the context of polynomial regression.
The algorithm fmincg produces optimized estimated $\hat \
|
46,640
|
Why do we use the Unregularized Cost to plot a Learning Curve?
|
Because if you want to know the actual cost, you need to look at the unregularized cost.
Consider LASSO regression, where we we tell a kind of mathematical white lie in the formula by adding a term that reflects the sum of the parameter values. Adding this term influences the parameter estimates. But at the end of the day when we want to make predictions with this model, we will be evaluating those predictions purely using the sum of squared prediction errors, without the extra term.
For an analogy, think about setting your alarm clock a half hour ahead so that you never wake up late. Sure, you wake up a half hour early, but you don't go through the rest of the day thinking that it's a half hour later than it is. The time on your alarm clock is the regularized cost, and the time on everyone else's clock is the unregularized cost.
|
Why do we use the Unregularized Cost to plot a Learning Curve?
|
Because if you want to know the actual cost, you need to look at the unregularized cost.
Consider LASSO regression, where we we tell a kind of mathematical white lie in the formula by adding a term th
|
Why do we use the Unregularized Cost to plot a Learning Curve?
Because if you want to know the actual cost, you need to look at the unregularized cost.
Consider LASSO regression, where we we tell a kind of mathematical white lie in the formula by adding a term that reflects the sum of the parameter values. Adding this term influences the parameter estimates. But at the end of the day when we want to make predictions with this model, we will be evaluating those predictions purely using the sum of squared prediction errors, without the extra term.
For an analogy, think about setting your alarm clock a half hour ahead so that you never wake up late. Sure, you wake up a half hour early, but you don't go through the rest of the day thinking that it's a half hour later than it is. The time on your alarm clock is the regularized cost, and the time on everyone else's clock is the unregularized cost.
|
Why do we use the Unregularized Cost to plot a Learning Curve?
Because if you want to know the actual cost, you need to look at the unregularized cost.
Consider LASSO regression, where we we tell a kind of mathematical white lie in the formula by adding a term th
|
46,641
|
Is Simpson's Paradox always an example of confounding?
|
Here's a simple visual example of Simpson's Paradox where there is no confounding:
Observing the relationship between the two variables Sex and Medical Cost, there would appear to be a strong causal relationship:
However if you add a third variable, Age, in the causal diagram:
it becomes clear that the relationship between Sex and Cost is not significant, rather there is a strong linear relationship between Age and Cost.
Meanwhile, there clearly should be no causal relationship between Age and Sex in the diagram, hence Age is not a confounder. To be clear, in this example Sex would no longer be in a causal relationship with Cost, which would by definition mean confounding is not possible if there are only two variables in the path diagram.
|
Is Simpson's Paradox always an example of confounding?
|
Here's a simple visual example of Simpson's Paradox where there is no confounding:
Observing the relationship between the two variables Sex and Medical Cost, there would appear to be a strong causal
|
Is Simpson's Paradox always an example of confounding?
Here's a simple visual example of Simpson's Paradox where there is no confounding:
Observing the relationship between the two variables Sex and Medical Cost, there would appear to be a strong causal relationship:
However if you add a third variable, Age, in the causal diagram:
it becomes clear that the relationship between Sex and Cost is not significant, rather there is a strong linear relationship between Age and Cost.
Meanwhile, there clearly should be no causal relationship between Age and Sex in the diagram, hence Age is not a confounder. To be clear, in this example Sex would no longer be in a causal relationship with Cost, which would by definition mean confounding is not possible if there are only two variables in the path diagram.
|
Is Simpson's Paradox always an example of confounding?
Here's a simple visual example of Simpson's Paradox where there is no confounding:
Observing the relationship between the two variables Sex and Medical Cost, there would appear to be a strong causal
|
46,642
|
Is Simpson's Paradox always an example of confounding?
|
You could imagine forming subclasses based on X, and the relationship between X and Y within each subclass opposes the relationship between X and Y across the sample. You could conceive of the subclasses as a confounder, but if you've artificially imposed them and they come from nothing but the already measured X variable, then no additional substantive confounding variable would have to be introduced.
|
Is Simpson's Paradox always an example of confounding?
|
You could imagine forming subclasses based on X, and the relationship between X and Y within each subclass opposes the relationship between X and Y across the sample. You could conceive of the subclas
|
Is Simpson's Paradox always an example of confounding?
You could imagine forming subclasses based on X, and the relationship between X and Y within each subclass opposes the relationship between X and Y across the sample. You could conceive of the subclasses as a confounder, but if you've artificially imposed them and they come from nothing but the already measured X variable, then no additional substantive confounding variable would have to be introduced.
|
Is Simpson's Paradox always an example of confounding?
You could imagine forming subclasses based on X, and the relationship between X and Y within each subclass opposes the relationship between X and Y across the sample. You could conceive of the subclas
|
46,643
|
Is Simpson's Paradox always an example of confounding?
|
No, Simpson's paradox is not always about confounding. In fact, I would say there is no reason to be surprised by sign reversals if you already know the covariate you adjust for is a confounder, you should check this answer here. You can have sign reversal adjusting for colliders or mediators, and without causal knowledge, you cannot know which estimate will give you the correct answer. If you want to play with simulations showing several sign reversals each time you include a covariate for adjustment, you can check the Simpson Machine in Dagitty's website.
|
Is Simpson's Paradox always an example of confounding?
|
No, Simpson's paradox is not always about confounding. In fact, I would say there is no reason to be surprised by sign reversals if you already know the covariate you adjust for is a confounder, you s
|
Is Simpson's Paradox always an example of confounding?
No, Simpson's paradox is not always about confounding. In fact, I would say there is no reason to be surprised by sign reversals if you already know the covariate you adjust for is a confounder, you should check this answer here. You can have sign reversal adjusting for colliders or mediators, and without causal knowledge, you cannot know which estimate will give you the correct answer. If you want to play with simulations showing several sign reversals each time you include a covariate for adjustment, you can check the Simpson Machine in Dagitty's website.
|
Is Simpson's Paradox always an example of confounding?
No, Simpson's paradox is not always about confounding. In fact, I would say there is no reason to be surprised by sign reversals if you already know the covariate you adjust for is a confounder, you s
|
46,644
|
Hosmer-Lemeshow test in R
|
Update 28 July: the below has now been pushed to CRAN in the generalhoslem package along with the Lipsitz and Pulkstenis-Robinson tests.
Fagerland and Hosmer discuss a generalisation of the Hosmer-Lemeshow test and two other approaches (the Lipsitz test and Pulkstenis-Robinson tests) in A goodness-of-fit test for the proportional odds regression model 2013 Stat Med and in Tests for goodness of fit in ordinal logistic regression models (2016) Journal of Statistical Computing and Simulation.
I haven't checked properly yet, but as far as I know, they haven't been implemented in R. If that turns out to be the case, I plan to add them to the generalhoslem package.
EDIT: it's probably also worth pointing out Fagerland and Hosmer's recommendation that
because the tests may detect different types of lack of fit, a thorough assessment of goodness of fit requires use of all three approaches.
ANOTHER EDIT: I didn't realise this straight away but the only difference between the multinomial version of the Hosmer-Lemeshow test and the ordinal version is in the degrees of freedom. Strictly speaking, the test for ordinal response models requires sorting the observations by a weighted ordinal 'response score' but they show in the 2012 article above that that is equivalent to binning the observations in the same way as in the multinomial case.
I haven't properly tested it but the the logitgof function on my github page should now do the trick: https://github.com/matthewjay15/generalhoslem-v1.1.0/blob/1.2.0.9000/logitgof.R Note that if you examine the observed and expected tables produced by this function, then the columns may not be in the right order (they will be alphabetical). This shouldn't make any difference to the test statistic, though, as they will correspond to each other.
An example using the ordinal and MASS packages:
library(reshape) # needed by logitgof
logitgof <- function (obs, exp, g = 10, ord = FALSE) {
DNAME <- paste(deparse(substitute(obs)), deparse(substitute(exp)), sep = ", ")
yhat <- exp
if (is.null(ncol(yhat))) {
mult <- FALSE
} else {
if (ncol(yhat) == 1) {
mult <- FALSE
} else mult <- TRUE
}
n <- ncol(yhat)
if (mult) {
if (!ord) {
METHOD <- "Hosmer and Lemeshow test (multinomial model)"
} else {
METHOD <- "Hosmer and Lemeshow test (ordinal model)"
}
qq <- unique(quantile(1 - yhat[, 1], probs = seq(0, 1, 1/g)))
cutyhats <- cut(1 - yhat[, 1], breaks = qq, include.lowest = TRUE)
dfobs <- data.frame(obs, cutyhats)
dfobsmelt <- melt(dfobs, id.vars = 2)
observed <- cast(dfobsmelt, cutyhats ~ value, length)
observed <- observed[order(c(1, names(observed[, 2:ncol(observed)])))]
dfexp <- data.frame(yhat, cutyhats)
dfexpmelt <- melt(dfexp, id.vars = ncol(dfexp))
expected <- cast(dfexpmelt, cutyhats ~ variable, sum)
expected <- expected[order(c(1, names(expected[, 2:ncol(expected)])))]
stddiffs <- abs(observed[, 2:ncol(observed)] - expected[, 2:ncol(expected)]) / sqrt(expected[, 2:ncol(expected)])
if (ncol(expected) != ncol(observed)) stop("Observed and expected tables have different number of columns. Check you entered the correct data.")
chisq <- sum((observed[, 2:ncol(observed)] - expected[, 2:ncol(expected)])^2 / expected[, 2:ncol(expected)])
if (!ord) {
PARAMETER <- (nrow(expected) - 2) * (ncol(yhat) - 1)
} else {
PARAMETER <- (nrow(expected) - 2) * (ncol(yhat) - 1) + ncol(yhat) - 2
}
} else {
METHOD <- "Hosmer and Lemeshow test (binary model)"
if (is.factor(obs)) {
y <- as.numeric(obs) - 1
} else {
y <- obs
}
qq <- unique(quantile(yhat, probs = seq(0, 1, 1/g)))
cutyhat <- cut(yhat, breaks = qq, include.lowest = TRUE)
observed <- xtabs(cbind(y0 = 1 - y, y1 = y) ~ cutyhat)
expected <- xtabs(cbind(yhat0 = 1 - yhat, yhat1 = yhat) ~ cutyhat)
stddiffs <- abs(observed - expected) / sqrt(expected)
chisq <- sum((observed - expected)^2/expected)
PARAMETER <- nrow(expected) - 2
}
if (g != nrow(expected))
warning(paste("Not possible to compute", g, "rows. There might be too few observations."))
if (any(expected[, 2:ncol(expected)] < 1))
warning("At least one cell in the expected frequencies table is < 1. Chi-square approximation may be incorrect.")
PVAL <- 1 - pchisq(chisq, PARAMETER)
names(chisq) <- "X-squared"
names(PARAMETER) <- "df"
structure(list(statistic = chisq, parameter = PARAMETER,
p.value = PVAL, method = METHOD, data.name = DNAME, observed = observed,
expected = expected, stddiffs = stddiffs), class = "htest")
}
library(foreign) # just to download the example dataset
# with the ordinal package
library(ordinal)
ml <- read.dta("http://www.ats.ucla.edu/stat/data/hsbdemo.dta")
mod1 <- clm(ses ~ female + write + read, data = ml)
# extract predicted probs for each level of outcome
predprob <- data.frame(id = ml$id, female = ml$female, read = ml$read, write = ml$write)
fv <- predict(mod1, newdata = predprob, type = "prob")$fit
logitgof(ml$ses, fv, ord = TRUE) # set ord to TRUE to run ordinal instead of multinomial test
# with MASS
library(MASS)
mod2 <- polr(ses ~ female + write + read, data = ml)
logitgof(ml$ses, fitted(mod2), ord = TRUE)
|
Hosmer-Lemeshow test in R
|
Update 28 July: the below has now been pushed to CRAN in the generalhoslem package along with the Lipsitz and Pulkstenis-Robinson tests.
Fagerland and Hosmer discuss a generalisation of the Hosmer-Le
|
Hosmer-Lemeshow test in R
Update 28 July: the below has now been pushed to CRAN in the generalhoslem package along with the Lipsitz and Pulkstenis-Robinson tests.
Fagerland and Hosmer discuss a generalisation of the Hosmer-Lemeshow test and two other approaches (the Lipsitz test and Pulkstenis-Robinson tests) in A goodness-of-fit test for the proportional odds regression model 2013 Stat Med and in Tests for goodness of fit in ordinal logistic regression models (2016) Journal of Statistical Computing and Simulation.
I haven't checked properly yet, but as far as I know, they haven't been implemented in R. If that turns out to be the case, I plan to add them to the generalhoslem package.
EDIT: it's probably also worth pointing out Fagerland and Hosmer's recommendation that
because the tests may detect different types of lack of fit, a thorough assessment of goodness of fit requires use of all three approaches.
ANOTHER EDIT: I didn't realise this straight away but the only difference between the multinomial version of the Hosmer-Lemeshow test and the ordinal version is in the degrees of freedom. Strictly speaking, the test for ordinal response models requires sorting the observations by a weighted ordinal 'response score' but they show in the 2012 article above that that is equivalent to binning the observations in the same way as in the multinomial case.
I haven't properly tested it but the the logitgof function on my github page should now do the trick: https://github.com/matthewjay15/generalhoslem-v1.1.0/blob/1.2.0.9000/logitgof.R Note that if you examine the observed and expected tables produced by this function, then the columns may not be in the right order (they will be alphabetical). This shouldn't make any difference to the test statistic, though, as they will correspond to each other.
An example using the ordinal and MASS packages:
library(reshape) # needed by logitgof
logitgof <- function (obs, exp, g = 10, ord = FALSE) {
DNAME <- paste(deparse(substitute(obs)), deparse(substitute(exp)), sep = ", ")
yhat <- exp
if (is.null(ncol(yhat))) {
mult <- FALSE
} else {
if (ncol(yhat) == 1) {
mult <- FALSE
} else mult <- TRUE
}
n <- ncol(yhat)
if (mult) {
if (!ord) {
METHOD <- "Hosmer and Lemeshow test (multinomial model)"
} else {
METHOD <- "Hosmer and Lemeshow test (ordinal model)"
}
qq <- unique(quantile(1 - yhat[, 1], probs = seq(0, 1, 1/g)))
cutyhats <- cut(1 - yhat[, 1], breaks = qq, include.lowest = TRUE)
dfobs <- data.frame(obs, cutyhats)
dfobsmelt <- melt(dfobs, id.vars = 2)
observed <- cast(dfobsmelt, cutyhats ~ value, length)
observed <- observed[order(c(1, names(observed[, 2:ncol(observed)])))]
dfexp <- data.frame(yhat, cutyhats)
dfexpmelt <- melt(dfexp, id.vars = ncol(dfexp))
expected <- cast(dfexpmelt, cutyhats ~ variable, sum)
expected <- expected[order(c(1, names(expected[, 2:ncol(expected)])))]
stddiffs <- abs(observed[, 2:ncol(observed)] - expected[, 2:ncol(expected)]) / sqrt(expected[, 2:ncol(expected)])
if (ncol(expected) != ncol(observed)) stop("Observed and expected tables have different number of columns. Check you entered the correct data.")
chisq <- sum((observed[, 2:ncol(observed)] - expected[, 2:ncol(expected)])^2 / expected[, 2:ncol(expected)])
if (!ord) {
PARAMETER <- (nrow(expected) - 2) * (ncol(yhat) - 1)
} else {
PARAMETER <- (nrow(expected) - 2) * (ncol(yhat) - 1) + ncol(yhat) - 2
}
} else {
METHOD <- "Hosmer and Lemeshow test (binary model)"
if (is.factor(obs)) {
y <- as.numeric(obs) - 1
} else {
y <- obs
}
qq <- unique(quantile(yhat, probs = seq(0, 1, 1/g)))
cutyhat <- cut(yhat, breaks = qq, include.lowest = TRUE)
observed <- xtabs(cbind(y0 = 1 - y, y1 = y) ~ cutyhat)
expected <- xtabs(cbind(yhat0 = 1 - yhat, yhat1 = yhat) ~ cutyhat)
stddiffs <- abs(observed - expected) / sqrt(expected)
chisq <- sum((observed - expected)^2/expected)
PARAMETER <- nrow(expected) - 2
}
if (g != nrow(expected))
warning(paste("Not possible to compute", g, "rows. There might be too few observations."))
if (any(expected[, 2:ncol(expected)] < 1))
warning("At least one cell in the expected frequencies table is < 1. Chi-square approximation may be incorrect.")
PVAL <- 1 - pchisq(chisq, PARAMETER)
names(chisq) <- "X-squared"
names(PARAMETER) <- "df"
structure(list(statistic = chisq, parameter = PARAMETER,
p.value = PVAL, method = METHOD, data.name = DNAME, observed = observed,
expected = expected, stddiffs = stddiffs), class = "htest")
}
library(foreign) # just to download the example dataset
# with the ordinal package
library(ordinal)
ml <- read.dta("http://www.ats.ucla.edu/stat/data/hsbdemo.dta")
mod1 <- clm(ses ~ female + write + read, data = ml)
# extract predicted probs for each level of outcome
predprob <- data.frame(id = ml$id, female = ml$female, read = ml$read, write = ml$write)
fv <- predict(mod1, newdata = predprob, type = "prob")$fit
logitgof(ml$ses, fv, ord = TRUE) # set ord to TRUE to run ordinal instead of multinomial test
# with MASS
library(MASS)
mod2 <- polr(ses ~ female + write + read, data = ml)
logitgof(ml$ses, fitted(mod2), ord = TRUE)
|
Hosmer-Lemeshow test in R
Update 28 July: the below has now been pushed to CRAN in the generalhoslem package along with the Lipsitz and Pulkstenis-Robinson tests.
Fagerland and Hosmer discuss a generalisation of the Hosmer-Le
|
46,645
|
How to find the sweet spot
|
By "sweet spot," I think we can assume you mean the inflection point -- the point where the growth in new users rolls over and begins to flatten out towards an asymptomtic max. There are no shortage of ways to analyze this information. One of them is as a diffusion process. Something that might help you visualize this would be not to treat it as a scatterplot but rather to plot the cumulative number of new users by day. The shape of that curve should suggest the inflection point. The basic idea is that growth is S-shaped -- slow at the beginning and the end with a rapid rise in the middle in the curve.
Mathematical modeling of that process began with Gompertz in the early 19th c but there are many other, newer models. This wiki post (https://en.wikipedia.org/wiki/Gompertz_function ) describes that model:
Formula
${\displaystyle y(t)=a\mathrm {e} ^{-b\mathrm {e} ^{-ct}},}$ where
a is an asymptote, since ${\displaystyle \lim _{t\to \infty }a\mathrm
> {e} ^{-b\mathrm {e} ^{-ct}}=a\mathrm {e} ^{0}=a}$ and $b$, $c$ are positive
numbers $b$ sets the displacement along the $x$-axis (translates the graph
to the left or right) $c$ sets the growth rate ($y$ scaling) $e$ is Euler's
Number ($e = 2.71828 \cdots$).
(Apologies for any bad formatting)
In the marketing of new products, Rogers' diffusion model is one of the most widely cited papers in any field.
His model was given mathematical formulation by Frank Bass and has seen many amendments and variations over the years.
Bass, F. M. (1969), “A New Product Growth Model for Consumer Durables,” Management Science, 215-227
Other models were developed in biological mathematics to describe the growth of, e.g., pea pods. Known as the Fisher-Pry transform which is described here (here). Fisher-Pry has been applied to the diffusion of new technology by groups such as the Program for the Human Environment at Rockefeller University.
All of the models mentioned so far basically involve univariate analysis. Extensions to multivariate regression models have been made recently. A good resource for those more advanced models (which would facilitate introducing promotion spend as a covariate and include R code) are available from these lecture notes:
http://www.unc.edu/courses/2008fall/ecol/563/001/docs/lectures/lecture27.htm
Here are the contents of that website:
Overview of nonlinear mixed effects models
Deciding which parameters should be made random in linear mixed effects models
Centering a predictor to reduce parameter correlations in linear models
The kestrel data set
The Gompertz model
selfStart functions in R
Deciding which parameters should be made random in a Gompertz mixed effects model
Interpreting the parameters of the SSgompertz function
|
How to find the sweet spot
|
By "sweet spot," I think we can assume you mean the inflection point -- the point where the growth in new users rolls over and begins to flatten out towards an asymptomtic max. There are no shortage o
|
How to find the sweet spot
By "sweet spot," I think we can assume you mean the inflection point -- the point where the growth in new users rolls over and begins to flatten out towards an asymptomtic max. There are no shortage of ways to analyze this information. One of them is as a diffusion process. Something that might help you visualize this would be not to treat it as a scatterplot but rather to plot the cumulative number of new users by day. The shape of that curve should suggest the inflection point. The basic idea is that growth is S-shaped -- slow at the beginning and the end with a rapid rise in the middle in the curve.
Mathematical modeling of that process began with Gompertz in the early 19th c but there are many other, newer models. This wiki post (https://en.wikipedia.org/wiki/Gompertz_function ) describes that model:
Formula
${\displaystyle y(t)=a\mathrm {e} ^{-b\mathrm {e} ^{-ct}},}$ where
a is an asymptote, since ${\displaystyle \lim _{t\to \infty }a\mathrm
> {e} ^{-b\mathrm {e} ^{-ct}}=a\mathrm {e} ^{0}=a}$ and $b$, $c$ are positive
numbers $b$ sets the displacement along the $x$-axis (translates the graph
to the left or right) $c$ sets the growth rate ($y$ scaling) $e$ is Euler's
Number ($e = 2.71828 \cdots$).
(Apologies for any bad formatting)
In the marketing of new products, Rogers' diffusion model is one of the most widely cited papers in any field.
His model was given mathematical formulation by Frank Bass and has seen many amendments and variations over the years.
Bass, F. M. (1969), “A New Product Growth Model for Consumer Durables,” Management Science, 215-227
Other models were developed in biological mathematics to describe the growth of, e.g., pea pods. Known as the Fisher-Pry transform which is described here (here). Fisher-Pry has been applied to the diffusion of new technology by groups such as the Program for the Human Environment at Rockefeller University.
All of the models mentioned so far basically involve univariate analysis. Extensions to multivariate regression models have been made recently. A good resource for those more advanced models (which would facilitate introducing promotion spend as a covariate and include R code) are available from these lecture notes:
http://www.unc.edu/courses/2008fall/ecol/563/001/docs/lectures/lecture27.htm
Here are the contents of that website:
Overview of nonlinear mixed effects models
Deciding which parameters should be made random in linear mixed effects models
Centering a predictor to reduce parameter correlations in linear models
The kestrel data set
The Gompertz model
selfStart functions in R
Deciding which parameters should be made random in a Gompertz mixed effects model
Interpreting the parameters of the SSgompertz function
|
How to find the sweet spot
By "sweet spot," I think we can assume you mean the inflection point -- the point where the growth in new users rolls over and begins to flatten out towards an asymptomtic max. There are no shortage o
|
46,646
|
How to find the sweet spot
|
What you are dealing with here is a regression problem with count data as the response variable. Before speculating on the existence of a "sweet spot" for the level of advertising, I recommend you just try to model the relationship between these variables. A negative binomial GLM would be a good place to start (see here for some further discussion about implementation in R):
#Fit a negative binomial regression model
MODEL <- glm.nb(new_users ~ promotion, data = data)
#Show the model
MODEL
summary(MODEL)
When undertaking regression analysis it is important to do various diagnostic tests to check that your model is okay. Once this is done, and you have settled on an appropriate model, you can make inferences about the relationship between the variables which are more detailed inferences than just observing the sample correlation. Specifically, you can make an inference about the conditional distribution of the number of new users given the advertising level.
Once you have obtained a reasonable estimate for the conditional distribution of the number of new users given a particular level of advertising (e.g., through a negative binomial GLM) you will then be in a position to predict how many new users you can expect from an increase in advertising, starting at any existing advertising level. You can combine this with information on the cost of advertising to allow you to make a judgment on whether there is any "sweet spot" where advertising is most cost effective.
|
How to find the sweet spot
|
What you are dealing with here is a regression problem with count data as the response variable. Before speculating on the existence of a "sweet spot" for the level of advertising, I recommend you ju
|
How to find the sweet spot
What you are dealing with here is a regression problem with count data as the response variable. Before speculating on the existence of a "sweet spot" for the level of advertising, I recommend you just try to model the relationship between these variables. A negative binomial GLM would be a good place to start (see here for some further discussion about implementation in R):
#Fit a negative binomial regression model
MODEL <- glm.nb(new_users ~ promotion, data = data)
#Show the model
MODEL
summary(MODEL)
When undertaking regression analysis it is important to do various diagnostic tests to check that your model is okay. Once this is done, and you have settled on an appropriate model, you can make inferences about the relationship between the variables which are more detailed inferences than just observing the sample correlation. Specifically, you can make an inference about the conditional distribution of the number of new users given the advertising level.
Once you have obtained a reasonable estimate for the conditional distribution of the number of new users given a particular level of advertising (e.g., through a negative binomial GLM) you will then be in a position to predict how many new users you can expect from an increase in advertising, starting at any existing advertising level. You can combine this with information on the cost of advertising to allow you to make a judgment on whether there is any "sweet spot" where advertising is most cost effective.
|
How to find the sweet spot
What you are dealing with here is a regression problem with count data as the response variable. Before speculating on the existence of a "sweet spot" for the level of advertising, I recommend you ju
|
46,647
|
How to find the sweet spot
|
You cannot deduct from the data, that such a point really exists. You have a theory in your head, that at some Point more trp is not going to gain more users but that is not in your data. You will need to formulate this believe as a mathematical modell, then fit your data to that modell and then you can ask your question to the modell. For example you could believe, that an exponential function describes the relationship, then fit an exponential function to the data and investigate, when the slope of the exponential function gets so low, that you think it equals Zero for practical purposes. Or you might want to fit an polinomial curve and look for a place with true slope of zero.
The p-value of correlation depends a lot on whether you have enough data Points in a particular Intervall.
|
How to find the sweet spot
|
You cannot deduct from the data, that such a point really exists. You have a theory in your head, that at some Point more trp is not going to gain more users but that is not in your data. You will nee
|
How to find the sweet spot
You cannot deduct from the data, that such a point really exists. You have a theory in your head, that at some Point more trp is not going to gain more users but that is not in your data. You will need to formulate this believe as a mathematical modell, then fit your data to that modell and then you can ask your question to the modell. For example you could believe, that an exponential function describes the relationship, then fit an exponential function to the data and investigate, when the slope of the exponential function gets so low, that you think it equals Zero for practical purposes. Or you might want to fit an polinomial curve and look for a place with true slope of zero.
The p-value of correlation depends a lot on whether you have enough data Points in a particular Intervall.
|
How to find the sweet spot
You cannot deduct from the data, that such a point really exists. You have a theory in your head, that at some Point more trp is not going to gain more users but that is not in your data. You will nee
|
46,648
|
Similarity function with given properties
|
The function
$$ f\colon [0,1]\times[0,1]\to[0,1], \quad(x,y)\mapsto \frac{1}{4}x+\frac{1}{4}y+\frac{3}{4}(x-y)^2 $$
does what you want. Plus, it's positive, symmetric and definite ($x\neq y$ implies that $f(x,y)>0$).
Neither it nor its root is linearly homogeneous like a norm-derived distance function, though ($f(\lambda x, \lambda y)\neq\lambda f(x,y)$) - but that does not seem to possible anyway given your requirements.
I found it by estimating a linear model based on your input data, with covariates $x$, $y$ and $(x-y)^2$:
foo <- data.frame(a=c(1,.5,1,0,0),b=c(1,.5,0,1,0),y=c(.5,.25,1,1,0))
model <- lm(y~a*b+I((a-b)^2),foo)
xx <- yy <- seq(0,1,.01)
persp(x=xx,y=yy,z=outer(xx,yy,function(xx,yy)xx/4+yy/4+0.75*(xx-yy)^2))
|
Similarity function with given properties
|
The function
$$ f\colon [0,1]\times[0,1]\to[0,1], \quad(x,y)\mapsto \frac{1}{4}x+\frac{1}{4}y+\frac{3}{4}(x-y)^2 $$
does what you want. Plus, it's positive, symmetric and definite ($x\neq y$ implies t
|
Similarity function with given properties
The function
$$ f\colon [0,1]\times[0,1]\to[0,1], \quad(x,y)\mapsto \frac{1}{4}x+\frac{1}{4}y+\frac{3}{4}(x-y)^2 $$
does what you want. Plus, it's positive, symmetric and definite ($x\neq y$ implies that $f(x,y)>0$).
Neither it nor its root is linearly homogeneous like a norm-derived distance function, though ($f(\lambda x, \lambda y)\neq\lambda f(x,y)$) - but that does not seem to possible anyway given your requirements.
I found it by estimating a linear model based on your input data, with covariates $x$, $y$ and $(x-y)^2$:
foo <- data.frame(a=c(1,.5,1,0,0),b=c(1,.5,0,1,0),y=c(.5,.25,1,1,0))
model <- lm(y~a*b+I((a-b)^2),foo)
xx <- yy <- seq(0,1,.01)
persp(x=xx,y=yy,z=outer(xx,yy,function(xx,yy)xx/4+yy/4+0.75*(xx-yy)^2))
|
Similarity function with given properties
The function
$$ f\colon [0,1]\times[0,1]\to[0,1], \quad(x,y)\mapsto \frac{1}{4}x+\frac{1}{4}y+\frac{3}{4}(x-y)^2 $$
does what you want. Plus, it's positive, symmetric and definite ($x\neq y$ implies t
|
46,649
|
Meaning of Borel sets in discrete spaces
|
If $\Omega$ is countable, then we may without loss of generality label the outcomes by the integers and set $\Omega = \{1, 2, \dots\}$. This follows from the definition of countability.
That is, even if we are interested in an experiment where we pick balls from an urn, we can label the outcomes in the sample space by the integers. For example, maybe we let "$1$" denote the outcome that all balls are red, "$2$" the outcome that the first is blue and the rest are red, and so on in some coherent manner.
It suffices, thus, to consider the case where $\Omega$ is the natural numbers, or some subset thereof if we want to also deal with finite spaces. The metric on $\Omega$ is taken to be $d(x, y) = I(x \neq y)$, taking the value 1 if $x \neq y$ and 0 otherwise.
Now you may check$^*$ that all points in $\Omega$ are open sets, and that all unions of open sets are open sets. But that means that every subset of $\Omega$ is a Borel set. Remember, the Borel sets are those in the Borel $\sigma-$algebra, $\mathcal B = \sigma(\mathcal O)$, where $\mathcal O$ are the open subsets of $\Omega$.
Since all subsets are measurable, one usually does not bother with the Borel $\sigma-$algebra on discrete spaces, but instead directly declares all subsets of $\Omega$ to be measurable.
$^*$ Let's prove this. In a metric space, a set $A$ is open if for every $x\in A$ there exists an $\epsilon >0$ such that all points in the $\epsilon-$ball around $x$ are also in $A$.
In our example, take $A = \{x\}$ for an arbitrary $x \in \Omega$ and fix an $\epsilon < 1$, say $\epsilon = 1/2$. Then, $x$ is the only point in the open $1/2-$ball around $x$ (recall, the metric is 1 or 0), and $x\in A$ by definition so we conclude $A$ is open. That is, any point is an open set.
|
Meaning of Borel sets in discrete spaces
|
If $\Omega$ is countable, then we may without loss of generality label the outcomes by the integers and set $\Omega = \{1, 2, \dots\}$. This follows from the definition of countability.
That is, even
|
Meaning of Borel sets in discrete spaces
If $\Omega$ is countable, then we may without loss of generality label the outcomes by the integers and set $\Omega = \{1, 2, \dots\}$. This follows from the definition of countability.
That is, even if we are interested in an experiment where we pick balls from an urn, we can label the outcomes in the sample space by the integers. For example, maybe we let "$1$" denote the outcome that all balls are red, "$2$" the outcome that the first is blue and the rest are red, and so on in some coherent manner.
It suffices, thus, to consider the case where $\Omega$ is the natural numbers, or some subset thereof if we want to also deal with finite spaces. The metric on $\Omega$ is taken to be $d(x, y) = I(x \neq y)$, taking the value 1 if $x \neq y$ and 0 otherwise.
Now you may check$^*$ that all points in $\Omega$ are open sets, and that all unions of open sets are open sets. But that means that every subset of $\Omega$ is a Borel set. Remember, the Borel sets are those in the Borel $\sigma-$algebra, $\mathcal B = \sigma(\mathcal O)$, where $\mathcal O$ are the open subsets of $\Omega$.
Since all subsets are measurable, one usually does not bother with the Borel $\sigma-$algebra on discrete spaces, but instead directly declares all subsets of $\Omega$ to be measurable.
$^*$ Let's prove this. In a metric space, a set $A$ is open if for every $x\in A$ there exists an $\epsilon >0$ such that all points in the $\epsilon-$ball around $x$ are also in $A$.
In our example, take $A = \{x\}$ for an arbitrary $x \in \Omega$ and fix an $\epsilon < 1$, say $\epsilon = 1/2$. Then, $x$ is the only point in the open $1/2-$ball around $x$ (recall, the metric is 1 or 0), and $x\in A$ by definition so we conclude $A$ is open. That is, any point is an open set.
|
Meaning of Borel sets in discrete spaces
If $\Omega$ is countable, then we may without loss of generality label the outcomes by the integers and set $\Omega = \{1, 2, \dots\}$. This follows from the definition of countability.
That is, even
|
46,650
|
Strange outcomes in binary logistic regression in SPSS
|
mdewey already gave a good answer. However, given that SPSS did give you parameter estimates, I suspect you don't have full separation, but more probably multicollinearity, also known simply as "collinearity" - some of your predictors carry almost the same information, which commonly leads to large parameter estimates of opposite signs (which you have) and large standard errors (which you also have). I suggest reading up on multicollinearity.
mdewey already addressed how to detect separation: this occurs if one predictor (or a set of predictors) allow a perfect fit to your binary target variable. (Multi-)collinearity is present when some subset of your predictors carry almost the same information. This is a property of your predictors alone, not of the dependent variable (in particular, the concept is the same for OLS and for logistic regression, unlike separation, which is pretty intrinsical to logistic regression). Collinearity is commonly detected using Variance Inflation Factors (VIFs), although there are alternatives.
How you should address separation or collinearity depends on your science. If you have separation, you may actually be quite happy, since you have a perfectly fitting model! In the case of collinearity, you may want to simply delete one or more of the collinear predictors, or transform them via a Principal Components Analysis (PCA), retaining only the first principal component(s). Or you may want to look at this earlier question with some excellent suggestions. In either case, I'd suggest looking at whether the original or the modified model predicts well on a new sample. (If you don't have a new sample, you may want to perform cross-validation.)
Incidentally, you don't get a confidence interval for numerical reasons. SPSS tries to take the parameter estimate, add 1.96 times the standard error, and exponentiate the result. Unfortunately, $e^{5000+}$ won't really fit into the table window...
|
Strange outcomes in binary logistic regression in SPSS
|
mdewey already gave a good answer. However, given that SPSS did give you parameter estimates, I suspect you don't have full separation, but more probably multicollinearity, also known simply as "colli
|
Strange outcomes in binary logistic regression in SPSS
mdewey already gave a good answer. However, given that SPSS did give you parameter estimates, I suspect you don't have full separation, but more probably multicollinearity, also known simply as "collinearity" - some of your predictors carry almost the same information, which commonly leads to large parameter estimates of opposite signs (which you have) and large standard errors (which you also have). I suggest reading up on multicollinearity.
mdewey already addressed how to detect separation: this occurs if one predictor (or a set of predictors) allow a perfect fit to your binary target variable. (Multi-)collinearity is present when some subset of your predictors carry almost the same information. This is a property of your predictors alone, not of the dependent variable (in particular, the concept is the same for OLS and for logistic regression, unlike separation, which is pretty intrinsical to logistic regression). Collinearity is commonly detected using Variance Inflation Factors (VIFs), although there are alternatives.
How you should address separation or collinearity depends on your science. If you have separation, you may actually be quite happy, since you have a perfectly fitting model! In the case of collinearity, you may want to simply delete one or more of the collinear predictors, or transform them via a Principal Components Analysis (PCA), retaining only the first principal component(s). Or you may want to look at this earlier question with some excellent suggestions. In either case, I'd suggest looking at whether the original or the modified model predicts well on a new sample. (If you don't have a new sample, you may want to perform cross-validation.)
Incidentally, you don't get a confidence interval for numerical reasons. SPSS tries to take the parameter estimate, add 1.96 times the standard error, and exponentiate the result. Unfortunately, $e^{5000+}$ won't really fit into the table window...
|
Strange outcomes in binary logistic regression in SPSS
mdewey already gave a good answer. However, given that SPSS did give you parameter estimates, I suspect you don't have full separation, but more probably multicollinearity, also known simply as "colli
|
46,651
|
Strange outcomes in binary logistic regression in SPSS
|
You almost certainly have separation here. If you tabulate the outcome by your suspect predictors you will find that (a) if the predictor is binary there is only one level of your outcome for one level of the predictor (b) if you predictor is continuous then for a range of values above (below) a cut-off you only have one level of the outcome. What you do next depends on the underlying science of the problem but you can get finite estimates using Firth's method but I do not know whether it is available in SPSS (which I do not use).
|
Strange outcomes in binary logistic regression in SPSS
|
You almost certainly have separation here. If you tabulate the outcome by your suspect predictors you will find that (a) if the predictor is binary there is only one level of your outcome for one leve
|
Strange outcomes in binary logistic regression in SPSS
You almost certainly have separation here. If you tabulate the outcome by your suspect predictors you will find that (a) if the predictor is binary there is only one level of your outcome for one level of the predictor (b) if you predictor is continuous then for a range of values above (below) a cut-off you only have one level of the outcome. What you do next depends on the underlying science of the problem but you can get finite estimates using Firth's method but I do not know whether it is available in SPSS (which I do not use).
|
Strange outcomes in binary logistic regression in SPSS
You almost certainly have separation here. If you tabulate the outcome by your suspect predictors you will find that (a) if the predictor is binary there is only one level of your outcome for one leve
|
46,652
|
Strange outcomes in binary logistic regression in SPSS
|
One other postscript: logistic regression does have problems when there is separation. Two alternatives would be to use penalized logistic, which is available as the STATS FIRTHLOG extension command or to use DISCRIMINANT, which works even when there is separation.
|
Strange outcomes in binary logistic regression in SPSS
|
One other postscript: logistic regression does have problems when there is separation. Two alternatives would be to use penalized logistic, which is available as the STATS FIRTHLOG extension command
|
Strange outcomes in binary logistic regression in SPSS
One other postscript: logistic regression does have problems when there is separation. Two alternatives would be to use penalized logistic, which is available as the STATS FIRTHLOG extension command or to use DISCRIMINANT, which works even when there is separation.
|
Strange outcomes in binary logistic regression in SPSS
One other postscript: logistic regression does have problems when there is separation. Two alternatives would be to use penalized logistic, which is available as the STATS FIRTHLOG extension command
|
46,653
|
Relationship between Gaussian process and Regression by supervised learning model like SVR
|
There exists a very strong link between Gaussian process regression and kernel Ridge regression (also called Tikhonov regularization). Indeed, the posterior expectation you compute using Bayesian inference with prior $\mathcal{GP}(0,k)$ and additive noise model $\mathcal{N}(0,\eta^2)$ gives exactly the same predictions as what obtained using a kernel Ridge regression in the RKHS $\mathcal{H}_k$ of kernel $k$ and regularization parameter $\eta^2$, that is the solution of:
$${\arg\!\min}_{f\in\mathcal{H}_k} \sum_{i=1}^n\big(y_i-f(x_i)\big)^2 + \eta^2 \lVert f \rVert^2_{\mathcal{H}_k}\,.$$
Note that the Bayesian inferance also computes the posterior variance, for which the links with kernel regression are less clear.
For SVR you'll find some differences between the two predictions since the least-squares fitting term is replaced by the $\epsilon$-insensitive error function:
$$g_\epsilon(z) = \begin{cases}
|z|-\epsilon &\text{if }|z|\ge \epsilon\,,\\
0&\text{otherwise.}
\end{cases}$$
The value of $\epsilon$ intuitively controls the "sparsity" of the solution. And using absolute values instead of squares may render the solution more robust to outliers.
You can find additional details in the Chapter 6 of the famous book from Rasmussen and Williams.
|
Relationship between Gaussian process and Regression by supervised learning model like SVR
|
There exists a very strong link between Gaussian process regression and kernel Ridge regression (also called Tikhonov regularization). Indeed, the posterior expectation you compute using Bayesian infe
|
Relationship between Gaussian process and Regression by supervised learning model like SVR
There exists a very strong link between Gaussian process regression and kernel Ridge regression (also called Tikhonov regularization). Indeed, the posterior expectation you compute using Bayesian inference with prior $\mathcal{GP}(0,k)$ and additive noise model $\mathcal{N}(0,\eta^2)$ gives exactly the same predictions as what obtained using a kernel Ridge regression in the RKHS $\mathcal{H}_k$ of kernel $k$ and regularization parameter $\eta^2$, that is the solution of:
$${\arg\!\min}_{f\in\mathcal{H}_k} \sum_{i=1}^n\big(y_i-f(x_i)\big)^2 + \eta^2 \lVert f \rVert^2_{\mathcal{H}_k}\,.$$
Note that the Bayesian inferance also computes the posterior variance, for which the links with kernel regression are less clear.
For SVR you'll find some differences between the two predictions since the least-squares fitting term is replaced by the $\epsilon$-insensitive error function:
$$g_\epsilon(z) = \begin{cases}
|z|-\epsilon &\text{if }|z|\ge \epsilon\,,\\
0&\text{otherwise.}
\end{cases}$$
The value of $\epsilon$ intuitively controls the "sparsity" of the solution. And using absolute values instead of squares may render the solution more robust to outliers.
You can find additional details in the Chapter 6 of the famous book from Rasmussen and Williams.
|
Relationship between Gaussian process and Regression by supervised learning model like SVR
There exists a very strong link between Gaussian process regression and kernel Ridge regression (also called Tikhonov regularization). Indeed, the posterior expectation you compute using Bayesian infe
|
46,654
|
Mode of the $ \chi^2 $ distribution
|
The pdf of a $\chi^2_k$ distribution is,
$$f(x) = 2^{-k/2} \Gamma{(k/2)}^{-1} x^{k/2 - 1}e^{-x/2}. $$
We need to find $x^*$ such that $x^* = \arg \max_\limits{x > 0} f(x)$. Then $x^*$ is the mode. Note that $\arg \max_\limits{x > 0} f(x) = \arg \max_\limits{x > 0} \log f(x)$, so we will find the mode by maximizing the log of the pdf instead of maximizing the pdf (this turns out to be easier).
\begin{align*}
\log f(x) &= -\dfrac{k}{2} \log 2 - \log \Gamma(k/2) + \left(\dfrac{k}{2} - 1 \right) \log x - \dfrac{x}{2}\\
\dfrac{\log f(x)}{dx} &= \left(\dfrac{k}{2} - 1 \right) \dfrac{1}{x} - \dfrac{1}{2} \overset{set}{=} 0\\
\Rightarrow x^* &= k-2
\end{align*}
Thus we get that the mode is $x^* = k-2$. If $k \leq 2$, then the mode is $0$, since the $\chi^2$ pdf in that case is decreasing on the positives.
EDIT: To verify that the second derivate is negative, look at @MatthewGunn's comment below.
|
Mode of the $ \chi^2 $ distribution
|
The pdf of a $\chi^2_k$ distribution is,
$$f(x) = 2^{-k/2} \Gamma{(k/2)}^{-1} x^{k/2 - 1}e^{-x/2}. $$
We need to find $x^*$ such that $x^* = \arg \max_\limits{x > 0} f(x)$. Then $x^*$ is the mode. Not
|
Mode of the $ \chi^2 $ distribution
The pdf of a $\chi^2_k$ distribution is,
$$f(x) = 2^{-k/2} \Gamma{(k/2)}^{-1} x^{k/2 - 1}e^{-x/2}. $$
We need to find $x^*$ such that $x^* = \arg \max_\limits{x > 0} f(x)$. Then $x^*$ is the mode. Note that $\arg \max_\limits{x > 0} f(x) = \arg \max_\limits{x > 0} \log f(x)$, so we will find the mode by maximizing the log of the pdf instead of maximizing the pdf (this turns out to be easier).
\begin{align*}
\log f(x) &= -\dfrac{k}{2} \log 2 - \log \Gamma(k/2) + \left(\dfrac{k}{2} - 1 \right) \log x - \dfrac{x}{2}\\
\dfrac{\log f(x)}{dx} &= \left(\dfrac{k}{2} - 1 \right) \dfrac{1}{x} - \dfrac{1}{2} \overset{set}{=} 0\\
\Rightarrow x^* &= k-2
\end{align*}
Thus we get that the mode is $x^* = k-2$. If $k \leq 2$, then the mode is $0$, since the $\chi^2$ pdf in that case is decreasing on the positives.
EDIT: To verify that the second derivate is negative, look at @MatthewGunn's comment below.
|
Mode of the $ \chi^2 $ distribution
The pdf of a $\chi^2_k$ distribution is,
$$f(x) = 2^{-k/2} \Gamma{(k/2)}^{-1} x^{k/2 - 1}e^{-x/2}. $$
We need to find $x^*$ such that $x^* = \arg \max_\limits{x > 0} f(x)$. Then $x^*$ is the mode. Not
|
46,655
|
Mode of the $ \chi^2 $ distribution
|
The solution by Greenparker is correct and the double derivative can be proved to be less than 0. Substitute x=k-2 -> k=x+2 in the 2nd derivative solution obtained (i.e.
-(((x+2)/2)-1)*x^-2)
Thus we get, (-x/2)*(x^-2) which is negative. hence double derivative is negative.
|
Mode of the $ \chi^2 $ distribution
|
The solution by Greenparker is correct and the double derivative can be proved to be less than 0. Substitute x=k-2 -> k=x+2 in the 2nd derivative solution obtained (i.e.
-(((x+2)/2)-1)*x^-2)
|
Mode of the $ \chi^2 $ distribution
The solution by Greenparker is correct and the double derivative can be proved to be less than 0. Substitute x=k-2 -> k=x+2 in the 2nd derivative solution obtained (i.e.
-(((x+2)/2)-1)*x^-2)
Thus we get, (-x/2)*(x^-2) which is negative. hence double derivative is negative.
|
Mode of the $ \chi^2 $ distribution
The solution by Greenparker is correct and the double derivative can be proved to be less than 0. Substitute x=k-2 -> k=x+2 in the 2nd derivative solution obtained (i.e.
-(((x+2)/2)-1)*x^-2)
|
46,656
|
Reference level in GLM regression
|
I would mostly choose as reference level one which gives meaning in the applied context, that is, a reference level that actually is interesting as a reference in the application. So, in an experiment with several treatments and one control, I would choose the control as the reference level, in a marketing context with many product, I would choose a market leader as reference (or If I am an interested party, my own product.)
But, if some levels have very few observations, using such a level as a reference will lead to all the estimated contrasts$^\dagger$ having a large standard deviation, which is a difficulty for interpretation. So then some compromise must be made.
But what you have been told:
because this somehow makes the model more stable
is not true. Irrespective which level you choose as a reference, the model being estimated is the same, and will be stable or unstable the same. Choice of reference level only is a help for interpretation, not for numerical issues. And whatever contrasts you are most interested in, can always be computed after the fit, it is just convenient if we can read it directly off the standard output.
$^\dagger\colon$ When using treatment contrasts/treatment coding, all the estimated parameters are really contrast comparing level $j$ to the reference level.
|
Reference level in GLM regression
|
I would mostly choose as reference level one which gives meaning in the applied context, that is, a reference level that actually is interesting as a reference in the application. So, in an experiment
|
Reference level in GLM regression
I would mostly choose as reference level one which gives meaning in the applied context, that is, a reference level that actually is interesting as a reference in the application. So, in an experiment with several treatments and one control, I would choose the control as the reference level, in a marketing context with many product, I would choose a market leader as reference (or If I am an interested party, my own product.)
But, if some levels have very few observations, using such a level as a reference will lead to all the estimated contrasts$^\dagger$ having a large standard deviation, which is a difficulty for interpretation. So then some compromise must be made.
But what you have been told:
because this somehow makes the model more stable
is not true. Irrespective which level you choose as a reference, the model being estimated is the same, and will be stable or unstable the same. Choice of reference level only is a help for interpretation, not for numerical issues. And whatever contrasts you are most interested in, can always be computed after the fit, it is just convenient if we can read it directly off the standard output.
$^\dagger\colon$ When using treatment contrasts/treatment coding, all the estimated parameters are really contrast comparing level $j$ to the reference level.
|
Reference level in GLM regression
I would mostly choose as reference level one which gives meaning in the applied context, that is, a reference level that actually is interesting as a reference in the application. So, in an experiment
|
46,657
|
Difference of Frechet variables
|
The statistical understanding of the parameters--$m$ is a location, $s$ is a scale, and $\alpha$ is a power transformation--tells us how to proceed.
Consider this generalization of the problem. Let $F$ be any distribution function. Let $\{t_\alpha\,|\, \alpha\in A\subset\mathbb{R}^p\}$ be a parameterized family of strictly monotonic transformation functions that "play nicely" with rescaling in the following sense: there is a function $g$ such that for any positive number $s$
$$ t_\alpha(s\,t_\alpha^{-1}(y)) = g(s, \alpha) y.$$
This looks pretty abstract, so to fix the idea let's consider a common example where $p=1$ and $t_{(\alpha)}$ is the negative power transformation $x \to x^{-\alpha}$, $A = \{(\alpha)\,|\,\alpha \gt 0\}$. Then (dropping the distinction between the $1$-vector $(\alpha)$ and its component $\alpha$),
$$t_\alpha(s\,t_\alpha^{-1}(y)) = (s\,y^{-1/\alpha})^{-\alpha} = s^{-\alpha} y.\tag{1}$$
In this case we see
$$g(s,\alpha) = s^{-\alpha}.$$
Define a location-scale-shape family by means of parameters $\mu$, $\sigma$, and $\alpha$ via
$$F_{\mu, \sigma, \alpha}(x) = F\left(t_\alpha\left(\frac{x-\mu}{\sigma}\right)\right)$$
for $\mu\in\mathbb{R}$, $\sigma\gt 0$, and $\alpha\in A$. This means that any variable $X$ with such a distribution is obtained from a variable with an $F$ distribution by means of a $t_\alpha$ transformation, a rescaling by $\sigma$, and a shift by $\mu$.
Suppose $X$ has the distribution $F_{\mu, \sigma_1,\alpha}$ and the independent variable $Y$ has the distribution $F_{\mu, \sigma_2,\alpha}$. That is, they have the same shape and location but their scales might differ. Specifically,
$$X = \sigma_1 t_\alpha^{-1}(U) + \mu, \quad Y = \sigma_2 t_\alpha^{-1}(V) + \mu$$
for two independent variables $U, V$ distributed according to $F$.
Using this, the event $X - Y \gt 0$ may be rewritten as
$$t_\alpha^{-1}(U) \gt \sigma\, t_\alpha^{-1}(V)$$
for $\sigma = \sigma_2/\sigma_1$. The relationship $(1)$ simplifies this inequality to
$$ U \gt g(\sigma,\alpha) V.$$
(When $t_\alpha$ is decreasing, the $\gt$ changes to a $\lt$. In that case we should swap $U$ and $V$--which does nothing, since $U$ and $V$ are identically distributed--and we must change $g(\sigma,\alpha)$ to $1/g(\sigma,\alpha)$ in what follows.)
Because $U$ has an $F$ distribution, the chance of this relationship is
$$\Pr( U \gt g(\sigma,\alpha) V) = 1 - F(g(\sigma, \alpha)V).$$
Its expectation gives the answer:
$$\Pr(X - Y \gt 0) = \int_{\mathbb{R}} \left(1 - F(g(\sigma, \alpha)v)\right) dF(v).\tag{2}$$
The beauty of this solution is that it reduces the calculation to one involving only $F$. For instance, the Frechet distribution family is obtained from a negative power transformation of an exponential variable. Thus $$F(x) = 1 - \exp(-x);\quad dF(x) = \exp(-x)dx$$ (for $x\gt 0$ only) and (according to $(1)$)
$$t_\alpha(y) = y^{-\alpha}, \quad g(s,\alpha) = s^{-\alpha}.$$
Because this $t_\alpha$ is decreasing in $y$ for any $\alpha \gt 0$, we must invariably use $1/g(\sigma,\alpha) = \sigma^\alpha$ in the calculations. The value of $(2)$ therefore is
$$\int_0^\infty \exp(-\sigma^\alpha v)\exp(-v)dv = \int_0^\infty \exp(-(\sigma^\alpha + 1) v)dv = \frac{1}{1 + \sigma^\alpha} = \frac{\sigma_1^\alpha}{\sigma_1^\alpha + \sigma_2^\alpha}.$$
The actual amount of calculation needed to obtain this result is remarkably little.
|
Difference of Frechet variables
|
The statistical understanding of the parameters--$m$ is a location, $s$ is a scale, and $\alpha$ is a power transformation--tells us how to proceed.
Consider this generalization of the problem. Let
|
Difference of Frechet variables
The statistical understanding of the parameters--$m$ is a location, $s$ is a scale, and $\alpha$ is a power transformation--tells us how to proceed.
Consider this generalization of the problem. Let $F$ be any distribution function. Let $\{t_\alpha\,|\, \alpha\in A\subset\mathbb{R}^p\}$ be a parameterized family of strictly monotonic transformation functions that "play nicely" with rescaling in the following sense: there is a function $g$ such that for any positive number $s$
$$ t_\alpha(s\,t_\alpha^{-1}(y)) = g(s, \alpha) y.$$
This looks pretty abstract, so to fix the idea let's consider a common example where $p=1$ and $t_{(\alpha)}$ is the negative power transformation $x \to x^{-\alpha}$, $A = \{(\alpha)\,|\,\alpha \gt 0\}$. Then (dropping the distinction between the $1$-vector $(\alpha)$ and its component $\alpha$),
$$t_\alpha(s\,t_\alpha^{-1}(y)) = (s\,y^{-1/\alpha})^{-\alpha} = s^{-\alpha} y.\tag{1}$$
In this case we see
$$g(s,\alpha) = s^{-\alpha}.$$
Define a location-scale-shape family by means of parameters $\mu$, $\sigma$, and $\alpha$ via
$$F_{\mu, \sigma, \alpha}(x) = F\left(t_\alpha\left(\frac{x-\mu}{\sigma}\right)\right)$$
for $\mu\in\mathbb{R}$, $\sigma\gt 0$, and $\alpha\in A$. This means that any variable $X$ with such a distribution is obtained from a variable with an $F$ distribution by means of a $t_\alpha$ transformation, a rescaling by $\sigma$, and a shift by $\mu$.
Suppose $X$ has the distribution $F_{\mu, \sigma_1,\alpha}$ and the independent variable $Y$ has the distribution $F_{\mu, \sigma_2,\alpha}$. That is, they have the same shape and location but their scales might differ. Specifically,
$$X = \sigma_1 t_\alpha^{-1}(U) + \mu, \quad Y = \sigma_2 t_\alpha^{-1}(V) + \mu$$
for two independent variables $U, V$ distributed according to $F$.
Using this, the event $X - Y \gt 0$ may be rewritten as
$$t_\alpha^{-1}(U) \gt \sigma\, t_\alpha^{-1}(V)$$
for $\sigma = \sigma_2/\sigma_1$. The relationship $(1)$ simplifies this inequality to
$$ U \gt g(\sigma,\alpha) V.$$
(When $t_\alpha$ is decreasing, the $\gt$ changes to a $\lt$. In that case we should swap $U$ and $V$--which does nothing, since $U$ and $V$ are identically distributed--and we must change $g(\sigma,\alpha)$ to $1/g(\sigma,\alpha)$ in what follows.)
Because $U$ has an $F$ distribution, the chance of this relationship is
$$\Pr( U \gt g(\sigma,\alpha) V) = 1 - F(g(\sigma, \alpha)V).$$
Its expectation gives the answer:
$$\Pr(X - Y \gt 0) = \int_{\mathbb{R}} \left(1 - F(g(\sigma, \alpha)v)\right) dF(v).\tag{2}$$
The beauty of this solution is that it reduces the calculation to one involving only $F$. For instance, the Frechet distribution family is obtained from a negative power transformation of an exponential variable. Thus $$F(x) = 1 - \exp(-x);\quad dF(x) = \exp(-x)dx$$ (for $x\gt 0$ only) and (according to $(1)$)
$$t_\alpha(y) = y^{-\alpha}, \quad g(s,\alpha) = s^{-\alpha}.$$
Because this $t_\alpha$ is decreasing in $y$ for any $\alpha \gt 0$, we must invariably use $1/g(\sigma,\alpha) = \sigma^\alpha$ in the calculations. The value of $(2)$ therefore is
$$\int_0^\infty \exp(-\sigma^\alpha v)\exp(-v)dv = \int_0^\infty \exp(-(\sigma^\alpha + 1) v)dv = \frac{1}{1 + \sigma^\alpha} = \frac{\sigma_1^\alpha}{\sigma_1^\alpha + \sigma_2^\alpha}.$$
The actual amount of calculation needed to obtain this result is remarkably little.
|
Difference of Frechet variables
The statistical understanding of the parameters--$m$ is a location, $s$ is a scale, and $\alpha$ is a power transformation--tells us how to proceed.
Consider this generalization of the problem. Let
|
46,658
|
Difference of Frechet variables
|
I rename the variables $X_1=X$ and $X_2 = Y$ so the question is: What is the probability that $X_1>X_2$ given that $X_j$ is Frechet$(\alpha,s_j,m)$?
First note that this problem can be considered as the problem that
$$X_1 = \max\{X_1,X_2\},$$
this problem is well known in the theory of extreme values. First I note that
$$Pr(X_1>X_2) = Pr(X_1 - m > X_2 - m) = Pr(Z_1>Z_2),$$
defining $Z_j = X_j - m$ such that $Z_j$ is Frechet$(\alpha,s_j,0)$.
I then note that
$$ Pr(Z_1>Z_2) = Pr(Z_1 = \max\{Z_1,Z_2\}) = \int_{z_1} \int_{z_2} I[z_1 = \max\{z_1,z_2\}] f_{Z_2}(dz_2)f_{Z_1}(dz_1).$$
Because the Frechet with location $0$ has positive support on $(0,\infty)$ the first integral goes from $0$ to $\infty$. The same thing goes for the second integral however the indicator is 0 unless $z_2\leq z_1$ so the integral runs from $0$ to $z_1$. Making the limits explicit I therefore have
$$= \int_{0}^{\infty} \int_{0}^{z_1} f_{Z_2}(dz_2)f_{Z_1}(dz_1)$$
and it follows that
$$= \int_{0}^{\infty} F_{Z_2}(z_1)f_{Z_1}(z_1)dz_1, \ \ \ (eq. 1)$$
into which the Frechet c.d.f. and p.d.f. can be inserted to get the solution.
The Frechet c.d.f. is given as
$$F_{Z_j}(z) = Pr(Z_j\leq z) = \exp\left( - \left(\frac{z}{s_j}\right)^{-\alpha} \right),$$
which I choose to reparameterize in order to get
$$F_{Z_j}(z) = \exp\left( - \Phi_j z^{-\alpha} \right),$$
having defined $\Phi_j :=s_j^\alpha$. By differentiation it follows that
$$f_{Z_j}(z) = \exp\left( - \Phi_j z^{-\alpha} \right) \Phi_j \alpha z^{-\alpha -1} \ \ \ (eq. 2).$$
Insert these into (eq.1) above to get
$$ Pr(Z_1 = \max\{Z_1,Z_2\}) = \int_{0}^{\infty} \exp\left( - \Phi_2 z^{-\alpha} \right) \times \exp\left( - \Phi_1 z^{-\alpha} \right) \Phi_1 \alpha z^{-\alpha -1} dz$$
paying attention to the fact that the $\Phi_j$ parameters come from respective distributions of $Z_1$ and $Z_2$. This integral is easily solved by defining $\Phi= \Phi_1 + \Phi_2$ and rewriting
$$ Pr(Z_1 = \max\{Z_1,Z_2\}) =\frac{\Phi_1}{\Phi} \int_{0}^{\infty} \exp\left( - \Phi z^{-\alpha} \right) \Phi \alpha z^{-\alpha -1} dz$$
where I move $\Phi_1$ outside the integral and then divide by $\Phi$ outside the integral and multiply with $\Phi$ inside the integral. It is now easy to see that the function under the integral is the p.d.f. as given in (eq. 2) and hence it integrates 1 implying that
$$ Pr(Z_1 = \max\{Z_1,Z_2\}) =\frac{\Phi_1}{\Phi},$$
which offcourse can be written using the parameterization $$\Phi_j :=s_j^\alpha$$ and $\Phi := \sum_j s_j^\alpha$.
This result can be generalized to get
$$Pr(i \in \arg \max_j \{X_j\}) = Pr(X_i \geq \max_j \{X_j\}) = \frac{\Phi_i}{\sum_j \Phi_j}$$
assuming independence and $X_j \sim Frechet(\alpha,\Phi_j,0)$ for all $j$.
|
Difference of Frechet variables
|
I rename the variables $X_1=X$ and $X_2 = Y$ so the question is: What is the probability that $X_1>X_2$ given that $X_j$ is Frechet$(\alpha,s_j,m)$?
First note that this problem can be considered as
|
Difference of Frechet variables
I rename the variables $X_1=X$ and $X_2 = Y$ so the question is: What is the probability that $X_1>X_2$ given that $X_j$ is Frechet$(\alpha,s_j,m)$?
First note that this problem can be considered as the problem that
$$X_1 = \max\{X_1,X_2\},$$
this problem is well known in the theory of extreme values. First I note that
$$Pr(X_1>X_2) = Pr(X_1 - m > X_2 - m) = Pr(Z_1>Z_2),$$
defining $Z_j = X_j - m$ such that $Z_j$ is Frechet$(\alpha,s_j,0)$.
I then note that
$$ Pr(Z_1>Z_2) = Pr(Z_1 = \max\{Z_1,Z_2\}) = \int_{z_1} \int_{z_2} I[z_1 = \max\{z_1,z_2\}] f_{Z_2}(dz_2)f_{Z_1}(dz_1).$$
Because the Frechet with location $0$ has positive support on $(0,\infty)$ the first integral goes from $0$ to $\infty$. The same thing goes for the second integral however the indicator is 0 unless $z_2\leq z_1$ so the integral runs from $0$ to $z_1$. Making the limits explicit I therefore have
$$= \int_{0}^{\infty} \int_{0}^{z_1} f_{Z_2}(dz_2)f_{Z_1}(dz_1)$$
and it follows that
$$= \int_{0}^{\infty} F_{Z_2}(z_1)f_{Z_1}(z_1)dz_1, \ \ \ (eq. 1)$$
into which the Frechet c.d.f. and p.d.f. can be inserted to get the solution.
The Frechet c.d.f. is given as
$$F_{Z_j}(z) = Pr(Z_j\leq z) = \exp\left( - \left(\frac{z}{s_j}\right)^{-\alpha} \right),$$
which I choose to reparameterize in order to get
$$F_{Z_j}(z) = \exp\left( - \Phi_j z^{-\alpha} \right),$$
having defined $\Phi_j :=s_j^\alpha$. By differentiation it follows that
$$f_{Z_j}(z) = \exp\left( - \Phi_j z^{-\alpha} \right) \Phi_j \alpha z^{-\alpha -1} \ \ \ (eq. 2).$$
Insert these into (eq.1) above to get
$$ Pr(Z_1 = \max\{Z_1,Z_2\}) = \int_{0}^{\infty} \exp\left( - \Phi_2 z^{-\alpha} \right) \times \exp\left( - \Phi_1 z^{-\alpha} \right) \Phi_1 \alpha z^{-\alpha -1} dz$$
paying attention to the fact that the $\Phi_j$ parameters come from respective distributions of $Z_1$ and $Z_2$. This integral is easily solved by defining $\Phi= \Phi_1 + \Phi_2$ and rewriting
$$ Pr(Z_1 = \max\{Z_1,Z_2\}) =\frac{\Phi_1}{\Phi} \int_{0}^{\infty} \exp\left( - \Phi z^{-\alpha} \right) \Phi \alpha z^{-\alpha -1} dz$$
where I move $\Phi_1$ outside the integral and then divide by $\Phi$ outside the integral and multiply with $\Phi$ inside the integral. It is now easy to see that the function under the integral is the p.d.f. as given in (eq. 2) and hence it integrates 1 implying that
$$ Pr(Z_1 = \max\{Z_1,Z_2\}) =\frac{\Phi_1}{\Phi},$$
which offcourse can be written using the parameterization $$\Phi_j :=s_j^\alpha$$ and $\Phi := \sum_j s_j^\alpha$.
This result can be generalized to get
$$Pr(i \in \arg \max_j \{X_j\}) = Pr(X_i \geq \max_j \{X_j\}) = \frac{\Phi_i}{\sum_j \Phi_j}$$
assuming independence and $X_j \sim Frechet(\alpha,\Phi_j,0)$ for all $j$.
|
Difference of Frechet variables
I rename the variables $X_1=X$ and $X_2 = Y$ so the question is: What is the probability that $X_1>X_2$ given that $X_j$ is Frechet$(\alpha,s_j,m)$?
First note that this problem can be considered as
|
46,659
|
Are K-Fold Cross Validation , Bootstrap ,Out of Bag fundamentally same?
|
Altough these 3 approaches consists in dividing a dataset into several subsets, they are still different in the main purpose of this division.
K-Fold Cross Validation (CV)
It consists in dividing the original set of observations into k subset of more or less same size. Then, you will use one of the subset as test set and the remaining subsets will be used to form your training set. You will repeat this kth times, where each time the subset used as test set will change.
As an example, if you use 3-fold CV, your original set will be divided into k1, k2, k3.
First, k1 will form the test set, k2 and k3 will form the training set.
Then, k2 will form the test set, k1 and k3 will form the training set.
Finally, k3 will form the test set, k1 and k3 will form the training set.
For each fold, you output the results and you aggregate these to obtain the final result.
Bootstrap
A bootstrap is a random subset of your original data, sometimes drawn with replacement (check http://www.stat.washington.edu/courses/stat527/s13/readings/EfronTibshirani_JASA_1997.pdf on the .632 rule), sometimes not. But the idea is that a bootstrap contains only a part of your whole set of observations. It is different from CV as it does not contain a testing set.
Bootstrap is used to train a different classifier each time on a different set of observations. To output your results, a combination method is used, like averaging for example.
Out-of-bag
As said above, not all observations are used to form bootstrap. The part not used forms the out-of-bag classifier, and can be used to assess the error rate of your classifier. Out-of-bag are typically used to compute the error-rate, and not to train your classifier.
|
Are K-Fold Cross Validation , Bootstrap ,Out of Bag fundamentally same?
|
Altough these 3 approaches consists in dividing a dataset into several subsets, they are still different in the main purpose of this division.
K-Fold Cross Validation (CV)
It consists in dividing the
|
Are K-Fold Cross Validation , Bootstrap ,Out of Bag fundamentally same?
Altough these 3 approaches consists in dividing a dataset into several subsets, they are still different in the main purpose of this division.
K-Fold Cross Validation (CV)
It consists in dividing the original set of observations into k subset of more or less same size. Then, you will use one of the subset as test set and the remaining subsets will be used to form your training set. You will repeat this kth times, where each time the subset used as test set will change.
As an example, if you use 3-fold CV, your original set will be divided into k1, k2, k3.
First, k1 will form the test set, k2 and k3 will form the training set.
Then, k2 will form the test set, k1 and k3 will form the training set.
Finally, k3 will form the test set, k1 and k3 will form the training set.
For each fold, you output the results and you aggregate these to obtain the final result.
Bootstrap
A bootstrap is a random subset of your original data, sometimes drawn with replacement (check http://www.stat.washington.edu/courses/stat527/s13/readings/EfronTibshirani_JASA_1997.pdf on the .632 rule), sometimes not. But the idea is that a bootstrap contains only a part of your whole set of observations. It is different from CV as it does not contain a testing set.
Bootstrap is used to train a different classifier each time on a different set of observations. To output your results, a combination method is used, like averaging for example.
Out-of-bag
As said above, not all observations are used to form bootstrap. The part not used forms the out-of-bag classifier, and can be used to assess the error rate of your classifier. Out-of-bag are typically used to compute the error-rate, and not to train your classifier.
|
Are K-Fold Cross Validation , Bootstrap ,Out of Bag fundamentally same?
Altough these 3 approaches consists in dividing a dataset into several subsets, they are still different in the main purpose of this division.
K-Fold Cross Validation (CV)
It consists in dividing the
|
46,660
|
Are K-Fold Cross Validation , Bootstrap ,Out of Bag fundamentally same?
|
First of all, you're right about the similarities: they are all types of resampling-based error estimates. Now about the differences.
cross validation vs. out-of-bootstrap: cross validation (as well as random splitting procedures known as set validation or hold-out validation) use resampling without replacement whereas bootstrap procedures resample with replacement. In cross validation, the resampling without replacement is done in a way that ensures each sample is tested exactly once per "run" of the cross validation.
(There are also non-random splitting procedures for cross validation such as venetian blinds or contiguous blocks which are used in special situations.)
Which of these procedures is best for your model validation depends e.g. on your sample situation and on the model you're validating (bootstrap is of no use with a modeling algorithm that deletes duplicates).
Out-of-bag estimates are very different from the "normal" out-of-bootstrap or cross validation estimates, because they do not estimate the generalization error of single models (typically the one model built on the whole data set) but the generalization error of an aggregated (ensemble) model (bag = bootstrap aggregation). So this difference is like the difference between a single decision tree and a random forest.
Again, you can also use resampling without replacement (e.g. cross validation) to generate the model ensemble for the aggregation - the principle works exactly the same. (Ask me if you need a literature example)
@benmaq already pointed out, bootstrapping is often used for other purposes than validation, particularly to estimate variability due to the random process of sampling that lead to the sample at hand.
An analogous procedure with resampling without replacement or more precisely, using the surrogate models of leave-one-out cross validation is known as jackknifing.
|
Are K-Fold Cross Validation , Bootstrap ,Out of Bag fundamentally same?
|
First of all, you're right about the similarities: they are all types of resampling-based error estimates. Now about the differences.
cross validation vs. out-of-bootstrap: cross validation (as well
|
Are K-Fold Cross Validation , Bootstrap ,Out of Bag fundamentally same?
First of all, you're right about the similarities: they are all types of resampling-based error estimates. Now about the differences.
cross validation vs. out-of-bootstrap: cross validation (as well as random splitting procedures known as set validation or hold-out validation) use resampling without replacement whereas bootstrap procedures resample with replacement. In cross validation, the resampling without replacement is done in a way that ensures each sample is tested exactly once per "run" of the cross validation.
(There are also non-random splitting procedures for cross validation such as venetian blinds or contiguous blocks which are used in special situations.)
Which of these procedures is best for your model validation depends e.g. on your sample situation and on the model you're validating (bootstrap is of no use with a modeling algorithm that deletes duplicates).
Out-of-bag estimates are very different from the "normal" out-of-bootstrap or cross validation estimates, because they do not estimate the generalization error of single models (typically the one model built on the whole data set) but the generalization error of an aggregated (ensemble) model (bag = bootstrap aggregation). So this difference is like the difference between a single decision tree and a random forest.
Again, you can also use resampling without replacement (e.g. cross validation) to generate the model ensemble for the aggregation - the principle works exactly the same. (Ask me if you need a literature example)
@benmaq already pointed out, bootstrapping is often used for other purposes than validation, particularly to estimate variability due to the random process of sampling that lead to the sample at hand.
An analogous procedure with resampling without replacement or more precisely, using the surrogate models of leave-one-out cross validation is known as jackknifing.
|
Are K-Fold Cross Validation , Bootstrap ,Out of Bag fundamentally same?
First of all, you're right about the similarities: they are all types of resampling-based error estimates. Now about the differences.
cross validation vs. out-of-bootstrap: cross validation (as well
|
46,661
|
Can a statistic depend on a parameter?
|
A statistic cannot be a function of unknown parameters by definition. In the case of the $t$ test our test statistic takes the form
$$
\frac{\sqrt{n}(\bar{x} - \mu_0)}{s}
$$
where $\mu_0$ is the hypothesized value for the unknown mean. That is, the $t$ statistic is a function of the data and the particular hypothesis we happen to be testing (which of course is known), and is not a function of any unknown parameters.
|
Can a statistic depend on a parameter?
|
A statistic cannot be a function of unknown parameters by definition. In the case of the $t$ test our test statistic takes the form
$$
\frac{\sqrt{n}(\bar{x} - \mu_0)}{s}
$$
where $\mu_0$ is the hypo
|
Can a statistic depend on a parameter?
A statistic cannot be a function of unknown parameters by definition. In the case of the $t$ test our test statistic takes the form
$$
\frac{\sqrt{n}(\bar{x} - \mu_0)}{s}
$$
where $\mu_0$ is the hypothesized value for the unknown mean. That is, the $t$ statistic is a function of the data and the particular hypothesis we happen to be testing (which of course is known), and is not a function of any unknown parameters.
|
Can a statistic depend on a parameter?
A statistic cannot be a function of unknown parameters by definition. In the case of the $t$ test our test statistic takes the form
$$
\frac{\sqrt{n}(\bar{x} - \mu_0)}{s}
$$
where $\mu_0$ is the hypo
|
46,662
|
Can a statistic depend on a parameter?
|
A test statistic is a function of observable random variables whose distribution does not depend on any unknown parameters. For example, if n is large enough, then the central limit theorem says that the normal distribution with mean zero and variance one is approximately valid for the test statistic:
$$
T=\frac{\bar{X}_n-\mu}{\sigma/\sqrt{n}},
$$
Clearly the test statistic involves unknown parameters. Generally, the inference question in this setting is to test whether or not the population mean, $\mu$, is equal to some value, say $\mu_0$, where $\mu_0$ is known (the test will decide whether or not it is really $\mu_0$). The standard error, $\sigma$, must be estimated. But, the null distribution of the test statistic is N(0,1), which, importantly, does not have any unknown parameters.
Many authors consider significance testing to be the same as hypothesis testing, which perhaps leads to confusion on this point. In hypothesis testing, the size of the test is determined a priori, which means the distribution of the test statistic must be estimable a priori, and hence must not have any unknown parameters. That is, before obtaining data and estimating $\bar{X}_n$ and $\mbox{se}(\bar{X}_n)=\sigma/\sqrt{n}$, the size of the test should be calculable. Here, the size of the test is the probability of making a type I error. More precisely, it is the supremum of the power of the test under the null hypothesis; where the power is the probability of rejecting the null hypothesis under a given parameter(s).
In significance testing, a p-value is determined a posteriori. The p-value is the probability of observing a test statistic "at least as large" as the one observed, based on a null distribution. It was not intended to be used in a hypothesis-test setting. One problem with doing so (e.g., rejecting the null hypothesis if the p-value is < alpha) is that there are different ways to calculate the p-value that can change the result depending on the type of test and the experiment conducted. See Goodman (1999, Ann Intern Med, vol. 130, pp. 995 - 1004) for a good discussion about the differences between the two testing procedures. Also, see the ASA's statement on p-values (2016, https://doi.org/10.1080/00031305.2016.1154108).
In the p-value/significance testing setting, it maybe is not important to have a sample statistic (i.e., $\bar{X}_n$ is a sample statistic because it is a function of observable random variables) have a distribution that is free of unknown parameters because it is calculated after observing the data without controlling for the size of the test.
In summary, a statistic like $\bar{X}_n$ is a sample statistic. Strictly speaking, it is not a test statistic because its distribution, say $N(\mu,\sigma^2)$, depends on unknown parameters. The size, and power, of a hypothesis cannot be regulated a priori with these nuisance parameters. But, authors who consider the two types of testing to be the same maybe do not worry about controlling for the size of the test. In their setting, a sample statistic would be the same as a test statistic. The statistic $(\bar{X}_n-\mu)/\mbox{se}(\bar{X}_n)$ is a test statistic because its distribution, $N(0,1)$, does not depend on unknown parameters, they are zero and one, resp.
|
Can a statistic depend on a parameter?
|
A test statistic is a function of observable random variables whose distribution does not depend on any unknown parameters. For example, if n is large enough, then the central limit theorem says that
|
Can a statistic depend on a parameter?
A test statistic is a function of observable random variables whose distribution does not depend on any unknown parameters. For example, if n is large enough, then the central limit theorem says that the normal distribution with mean zero and variance one is approximately valid for the test statistic:
$$
T=\frac{\bar{X}_n-\mu}{\sigma/\sqrt{n}},
$$
Clearly the test statistic involves unknown parameters. Generally, the inference question in this setting is to test whether or not the population mean, $\mu$, is equal to some value, say $\mu_0$, where $\mu_0$ is known (the test will decide whether or not it is really $\mu_0$). The standard error, $\sigma$, must be estimated. But, the null distribution of the test statistic is N(0,1), which, importantly, does not have any unknown parameters.
Many authors consider significance testing to be the same as hypothesis testing, which perhaps leads to confusion on this point. In hypothesis testing, the size of the test is determined a priori, which means the distribution of the test statistic must be estimable a priori, and hence must not have any unknown parameters. That is, before obtaining data and estimating $\bar{X}_n$ and $\mbox{se}(\bar{X}_n)=\sigma/\sqrt{n}$, the size of the test should be calculable. Here, the size of the test is the probability of making a type I error. More precisely, it is the supremum of the power of the test under the null hypothesis; where the power is the probability of rejecting the null hypothesis under a given parameter(s).
In significance testing, a p-value is determined a posteriori. The p-value is the probability of observing a test statistic "at least as large" as the one observed, based on a null distribution. It was not intended to be used in a hypothesis-test setting. One problem with doing so (e.g., rejecting the null hypothesis if the p-value is < alpha) is that there are different ways to calculate the p-value that can change the result depending on the type of test and the experiment conducted. See Goodman (1999, Ann Intern Med, vol. 130, pp. 995 - 1004) for a good discussion about the differences between the two testing procedures. Also, see the ASA's statement on p-values (2016, https://doi.org/10.1080/00031305.2016.1154108).
In the p-value/significance testing setting, it maybe is not important to have a sample statistic (i.e., $\bar{X}_n$ is a sample statistic because it is a function of observable random variables) have a distribution that is free of unknown parameters because it is calculated after observing the data without controlling for the size of the test.
In summary, a statistic like $\bar{X}_n$ is a sample statistic. Strictly speaking, it is not a test statistic because its distribution, say $N(\mu,\sigma^2)$, depends on unknown parameters. The size, and power, of a hypothesis cannot be regulated a priori with these nuisance parameters. But, authors who consider the two types of testing to be the same maybe do not worry about controlling for the size of the test. In their setting, a sample statistic would be the same as a test statistic. The statistic $(\bar{X}_n-\mu)/\mbox{se}(\bar{X}_n)$ is a test statistic because its distribution, $N(0,1)$, does not depend on unknown parameters, they are zero and one, resp.
|
Can a statistic depend on a parameter?
A test statistic is a function of observable random variables whose distribution does not depend on any unknown parameters. For example, if n is large enough, then the central limit theorem says that
|
46,663
|
Do we say that the $y_i$'s are i.i.d. if $n_iy_i \sim \text{Binomial}(n_i, \theta)$?
|
The $y_i$ are clearly not identically distributed -- their distribution functions
differ!
For example:
$\qquad$ $\qquad$ Two scaled binomials with common $p=0.4$.
As you see, those two functions don't coincide, so the distributions are not identical.
As for how you could describe them, you might say the $y_i$ are independent and have a common mean, or that they're independent sample proportions with the same population proportion, or they're independent estimates of the same proportion -- or a number of other things.
|
Do we say that the $y_i$'s are i.i.d. if $n_iy_i \sim \text{Binomial}(n_i, \theta)$?
|
The $y_i$ are clearly not identically distributed -- their distribution functions
differ!
For example:
$\qquad$ $\qquad$ Two scaled binomials with common $p=0.4$.
As you see, those two functions don'
|
Do we say that the $y_i$'s are i.i.d. if $n_iy_i \sim \text{Binomial}(n_i, \theta)$?
The $y_i$ are clearly not identically distributed -- their distribution functions
differ!
For example:
$\qquad$ $\qquad$ Two scaled binomials with common $p=0.4$.
As you see, those two functions don't coincide, so the distributions are not identical.
As for how you could describe them, you might say the $y_i$ are independent and have a common mean, or that they're independent sample proportions with the same population proportion, or they're independent estimates of the same proportion -- or a number of other things.
|
Do we say that the $y_i$'s are i.i.d. if $n_iy_i \sim \text{Binomial}(n_i, \theta)$?
The $y_i$ are clearly not identically distributed -- their distribution functions
differ!
For example:
$\qquad$ $\qquad$ Two scaled binomials with common $p=0.4$.
As you see, those two functions don'
|
46,664
|
Accuracy of model lower than no-information rate?
|
This is a strong argument why you should never use a discontinuous improper accuracy scoring rule. It should also be a clue that any scoring rule that tempts you to remove data from the sample has to be bogus. If you were truly interested in all-or-nothing classification then just ignore all the data and predict that an observation is always in the majority class. Better would be to develop a probability model (e.g., logistic regression) and use a proper accuracy score to assess the model's value (logarithmic probability scoring rule = deviance = log-likelihood = pseudo $R^2$ for this purpose; Brier score).
|
Accuracy of model lower than no-information rate?
|
This is a strong argument why you should never use a discontinuous improper accuracy scoring rule. It should also be a clue that any scoring rule that tempts you to remove data from the sample has to
|
Accuracy of model lower than no-information rate?
This is a strong argument why you should never use a discontinuous improper accuracy scoring rule. It should also be a clue that any scoring rule that tempts you to remove data from the sample has to be bogus. If you were truly interested in all-or-nothing classification then just ignore all the data and predict that an observation is always in the majority class. Better would be to develop a probability model (e.g., logistic regression) and use a proper accuracy score to assess the model's value (logarithmic probability scoring rule = deviance = log-likelihood = pseudo $R^2$ for this purpose; Brier score).
|
Accuracy of model lower than no-information rate?
This is a strong argument why you should never use a discontinuous improper accuracy scoring rule. It should also be a clue that any scoring rule that tempts you to remove data from the sample has to
|
46,665
|
Accuracy of model lower than no-information rate?
|
In highly skewed data sets beating the default accuracy can be very difficult and the ability to build a successful model may depend on how many positive examples you have and what the goals of your model are. Even with a very strong skew building reasonable models is possible, as an example the ipinyou data set has approx 2.5 million negative examples and only a few thousand positive ones.
With a skewed dataset such as the ipinyou, using training using the AUC can help as this looks at the area under the ROC curve and so predicting only one class doesn't improve the score. Other challenges that can be faced using such datasets is the size, so ensuring you can actually process the data is important and may effect the language (Python, R, etc) you use, where the processing takes place (computer or on the cloud), and what algorithms you try to work with. Linear methods may struggle with highly skewed data where as non-linear methods such as random-forest or XGboost can be much more effective.
Considering careful feature engineering is also important, also sparse matrices and 1 hot vector encoding may help you uncover the patterns within highly skewed data.
|
Accuracy of model lower than no-information rate?
|
In highly skewed data sets beating the default accuracy can be very difficult and the ability to build a successful model may depend on how many positive examples you have and what the goals of your m
|
Accuracy of model lower than no-information rate?
In highly skewed data sets beating the default accuracy can be very difficult and the ability to build a successful model may depend on how many positive examples you have and what the goals of your model are. Even with a very strong skew building reasonable models is possible, as an example the ipinyou data set has approx 2.5 million negative examples and only a few thousand positive ones.
With a skewed dataset such as the ipinyou, using training using the AUC can help as this looks at the area under the ROC curve and so predicting only one class doesn't improve the score. Other challenges that can be faced using such datasets is the size, so ensuring you can actually process the data is important and may effect the language (Python, R, etc) you use, where the processing takes place (computer or on the cloud), and what algorithms you try to work with. Linear methods may struggle with highly skewed data where as non-linear methods such as random-forest or XGboost can be much more effective.
Considering careful feature engineering is also important, also sparse matrices and 1 hot vector encoding may help you uncover the patterns within highly skewed data.
|
Accuracy of model lower than no-information rate?
In highly skewed data sets beating the default accuracy can be very difficult and the ability to build a successful model may depend on how many positive examples you have and what the goals of your m
|
46,666
|
Accuracy of model lower than no-information rate?
|
If I understand the question correctly, it should also be mentioned that a lot of the "standard" model statistics are meaningless on the test set as you have probably only applied the imbalance adjustment techniques on the training set. In this case, as @Jonno Bourne pointed out, the AUC would be a better accuracy measure.
|
Accuracy of model lower than no-information rate?
|
If I understand the question correctly, it should also be mentioned that a lot of the "standard" model statistics are meaningless on the test set as you have probably only applied the imbalance adjust
|
Accuracy of model lower than no-information rate?
If I understand the question correctly, it should also be mentioned that a lot of the "standard" model statistics are meaningless on the test set as you have probably only applied the imbalance adjustment techniques on the training set. In this case, as @Jonno Bourne pointed out, the AUC would be a better accuracy measure.
|
Accuracy of model lower than no-information rate?
If I understand the question correctly, it should also be mentioned that a lot of the "standard" model statistics are meaningless on the test set as you have probably only applied the imbalance adjust
|
46,667
|
Visualizing C5.0 Decision Tree? [closed]
|
I might be missing something in your question but simply
plot(myTree)
gives you a visualization of the tree (based on the infrastructure in partykit)
Of course the tree is very large and you either need to zoom into the image or use a large screen to read it...
You can also use partykit to just display subtrees. For example if you want to just show the left branch below the root (starting from node 2) and the right branch below the root (starting from node 33) you could do:
library("partykit")
myTree2 <- C50:::as.party.C5.0(myTree)
plot(myTree2[2])
plot(myTree2[33])
|
Visualizing C5.0 Decision Tree? [closed]
|
I might be missing something in your question but simply
plot(myTree)
gives you a visualization of the tree (based on the infrastructure in partykit)
Of course the tree is very large and you either
|
Visualizing C5.0 Decision Tree? [closed]
I might be missing something in your question but simply
plot(myTree)
gives you a visualization of the tree (based on the infrastructure in partykit)
Of course the tree is very large and you either need to zoom into the image or use a large screen to read it...
You can also use partykit to just display subtrees. For example if you want to just show the left branch below the root (starting from node 2) and the right branch below the root (starting from node 33) you could do:
library("partykit")
myTree2 <- C50:::as.party.C5.0(myTree)
plot(myTree2[2])
plot(myTree2[33])
|
Visualizing C5.0 Decision Tree? [closed]
I might be missing something in your question but simply
plot(myTree)
gives you a visualization of the tree (based on the infrastructure in partykit)
Of course the tree is very large and you either
|
46,668
|
Gibbs sampler gets stuck in local mode
|
This is certainly possible. It often happens when variables are strongly correlated.
For simplicity, consider a two-parameter model. Because Gibbs sampling alters only one variable at a time, it can only move vertically or horizontally on the Cartesian plane. It will be unable to reach regions of high posterior probability that lie diagonally from one another.
You may be able to resolve this problem by reparameterizing your model so the posterior is roughly spherical or by jointly updated blocks of correlated parameters. For an example of how to reparameterize, you could take a regression model and standardize the covariates. This makes is so that the slope won't have much effect on the fit of the intercept and vice versa. Without standardizing, data may sit far from the origin, and a small change in slope could demand a huge change in intercept for the regression line to fall across the data.
MCMC implementation is difficult and error prone, so you may want to consider tools such as STAN. STAN provides already-implemented state-of-the-art MCMC methods via a language similar in spirit to R's formula syntax. It has interfaces with many common scientific computing languages. (I am not affiliated with STAN -- merely a believer.)
|
Gibbs sampler gets stuck in local mode
|
This is certainly possible. It often happens when variables are strongly correlated.
For simplicity, consider a two-parameter model. Because Gibbs sampling alters only one variable at a time, it can
|
Gibbs sampler gets stuck in local mode
This is certainly possible. It often happens when variables are strongly correlated.
For simplicity, consider a two-parameter model. Because Gibbs sampling alters only one variable at a time, it can only move vertically or horizontally on the Cartesian plane. It will be unable to reach regions of high posterior probability that lie diagonally from one another.
You may be able to resolve this problem by reparameterizing your model so the posterior is roughly spherical or by jointly updated blocks of correlated parameters. For an example of how to reparameterize, you could take a regression model and standardize the covariates. This makes is so that the slope won't have much effect on the fit of the intercept and vice versa. Without standardizing, data may sit far from the origin, and a small change in slope could demand a huge change in intercept for the regression line to fall across the data.
MCMC implementation is difficult and error prone, so you may want to consider tools such as STAN. STAN provides already-implemented state-of-the-art MCMC methods via a language similar in spirit to R's formula syntax. It has interfaces with many common scientific computing languages. (I am not affiliated with STAN -- merely a believer.)
|
Gibbs sampler gets stuck in local mode
This is certainly possible. It often happens when variables are strongly correlated.
For simplicity, consider a two-parameter model. Because Gibbs sampling alters only one variable at a time, it can
|
46,669
|
Gibbs sampler gets stuck in local mode
|
Here's a simple case where Gibbs sampling gets stuck:
Imagine $(\theta_1,\theta_2)$ is a 50-50 mixture of two bivariate normals which each have independent components with variance 1.
The first mixture component is centered at $(10,-10)$. The second is centered at $(-10,10)$.
If you're in the top-left (green) mode, you'll stay there. If you're in the red mode, you'll stay there instead.
Each component is a local mode in the bivariate distribution. When you start, you'll right away get stuck in one mode or the other -- you can easily move around within either mode, but not between them.
In a situation like this one, there are a variety of strategies that might be used. For example, sometimes it is feasible to sample affected variables as a block. Sometimes you can integrate some variables out, which can reduce or even fix the problem. Sometimes you might be able to reparameterize (rewriting as $(\psi_1,\psi_2)=(\theta_1+\theta_2,\theta_1-\theta_2)$ would work for the abobe example). You may be able to add a variable (or more than one) that allow movement between modes that could not previously communicate. There are numerous other strategies; some more suited to particular kinds of multiple modes.
You might also move to a completely different approach, such as Hamiltonian Monte Carlo.
|
Gibbs sampler gets stuck in local mode
|
Here's a simple case where Gibbs sampling gets stuck:
Imagine $(\theta_1,\theta_2)$ is a 50-50 mixture of two bivariate normals which each have independent components with variance 1.
The first mixtur
|
Gibbs sampler gets stuck in local mode
Here's a simple case where Gibbs sampling gets stuck:
Imagine $(\theta_1,\theta_2)$ is a 50-50 mixture of two bivariate normals which each have independent components with variance 1.
The first mixture component is centered at $(10,-10)$. The second is centered at $(-10,10)$.
If you're in the top-left (green) mode, you'll stay there. If you're in the red mode, you'll stay there instead.
Each component is a local mode in the bivariate distribution. When you start, you'll right away get stuck in one mode or the other -- you can easily move around within either mode, but not between them.
In a situation like this one, there are a variety of strategies that might be used. For example, sometimes it is feasible to sample affected variables as a block. Sometimes you can integrate some variables out, which can reduce or even fix the problem. Sometimes you might be able to reparameterize (rewriting as $(\psi_1,\psi_2)=(\theta_1+\theta_2,\theta_1-\theta_2)$ would work for the abobe example). You may be able to add a variable (or more than one) that allow movement between modes that could not previously communicate. There are numerous other strategies; some more suited to particular kinds of multiple modes.
You might also move to a completely different approach, such as Hamiltonian Monte Carlo.
|
Gibbs sampler gets stuck in local mode
Here's a simple case where Gibbs sampling gets stuck:
Imagine $(\theta_1,\theta_2)$ is a 50-50 mixture of two bivariate normals which each have independent components with variance 1.
The first mixtur
|
46,670
|
P-Value for logistic regression model in R [closed]
|
Here's a way to do it:
x <- rnorm(100)
y <- factor(c(rep("ONE",50),rep("TWO",50)))
fmod <- glm(y~x,family = "binomial") ##"full" mod
nmod <- glm(y~1, family = 'binomial') ##"null" mod
anova(nmod, fmod, test = 'Chisq')
This output from this test will give the p value comparing the full model to the null model.
Analysis of Deviance Table
Model 1: y ~ 1
Model 2: y ~ x
Resid. Df Resid. Dev Df Deviance Pr(>Chi)
99 138.63
98 137.28 1 1.3454 **0.2461**
|
P-Value for logistic regression model in R [closed]
|
Here's a way to do it:
x <- rnorm(100)
y <- factor(c(rep("ONE",50),rep("TWO",50)))
fmod <- glm(y~x,family = "binomial") ##"full" mod
nmod <- glm(y~1, family = 'binomial') ##"null" mod
anova(nmod, fmod
|
P-Value for logistic regression model in R [closed]
Here's a way to do it:
x <- rnorm(100)
y <- factor(c(rep("ONE",50),rep("TWO",50)))
fmod <- glm(y~x,family = "binomial") ##"full" mod
nmod <- glm(y~1, family = 'binomial') ##"null" mod
anova(nmod, fmod, test = 'Chisq')
This output from this test will give the p value comparing the full model to the null model.
Analysis of Deviance Table
Model 1: y ~ 1
Model 2: y ~ x
Resid. Df Resid. Dev Df Deviance Pr(>Chi)
99 138.63
98 137.28 1 1.3454 **0.2461**
|
P-Value for logistic regression model in R [closed]
Here's a way to do it:
x <- rnorm(100)
y <- factor(c(rep("ONE",50),rep("TWO",50)))
fmod <- glm(y~x,family = "binomial") ##"full" mod
nmod <- glm(y~1, family = 'binomial') ##"null" mod
anova(nmod, fmod
|
46,671
|
How is the standard error of a slope calculated when the intercept term is omitted?
|
The formulas are the same as always, so let's focus on understanding what's going on.
Here is a small cloud of points. Its slope is uncertain. (Indeed, the coordinates of these points were drawn independently from a standard Normal distribution and then moved a little to the side, as shown in subsequent plots.)
Here is the OLS fit. The intercept is near $3$. That's kind of an accident: the OLS line must pass through the center of mass of the point cloud and where the intercept is depends on how far I moved the point cloud away from the origin. Due to the uncertain slope and the relatively large distance the points were moved to the right, the intercept could be almost anywhere. To illustrate, the slopes of the dashed lines differ from the fitted line by up to $\pm 1/2$. All of them fit the data pretty well.
After lowering the cloud by the height of the intercept, the OLS line (solid gray) goes through the origin, as expected.
The OLS line remains just as uncertain as it was before. The standard error of its slope is high. But if you were to constrain it to pass through the origin, the only wiggle room left is to vary the other end up and down through the point cloud. The dotted lines show the same range of slopes as before: but now the extreme ones don't go anywhere near the cloud. Constraining the fit has greatly increased the certainty in the slope.
|
How is the standard error of a slope calculated when the intercept term is omitted?
|
The formulas are the same as always, so let's focus on understanding what's going on.
Here is a small cloud of points. Its slope is uncertain. (Indeed, the coordinates of these points were drawn ind
|
How is the standard error of a slope calculated when the intercept term is omitted?
The formulas are the same as always, so let's focus on understanding what's going on.
Here is a small cloud of points. Its slope is uncertain. (Indeed, the coordinates of these points were drawn independently from a standard Normal distribution and then moved a little to the side, as shown in subsequent plots.)
Here is the OLS fit. The intercept is near $3$. That's kind of an accident: the OLS line must pass through the center of mass of the point cloud and where the intercept is depends on how far I moved the point cloud away from the origin. Due to the uncertain slope and the relatively large distance the points were moved to the right, the intercept could be almost anywhere. To illustrate, the slopes of the dashed lines differ from the fitted line by up to $\pm 1/2$. All of them fit the data pretty well.
After lowering the cloud by the height of the intercept, the OLS line (solid gray) goes through the origin, as expected.
The OLS line remains just as uncertain as it was before. The standard error of its slope is high. But if you were to constrain it to pass through the origin, the only wiggle room left is to vary the other end up and down through the point cloud. The dotted lines show the same range of slopes as before: but now the extreme ones don't go anywhere near the cloud. Constraining the fit has greatly increased the certainty in the slope.
|
How is the standard error of a slope calculated when the intercept term is omitted?
The formulas are the same as always, so let's focus on understanding what's going on.
Here is a small cloud of points. Its slope is uncertain. (Indeed, the coordinates of these points were drawn ind
|
46,672
|
Difference between standard beta and unstandard beta distributions?
|
Standard beta distribution is beta distribution bounded in $(0, 1)$ interval, so it is what we generally refer to when talking about beta distribution. Beta is not standard if it has other bounds, denoted sometimes as $a$ and $b$ (lower and upper bound), you can find some information here.
So the general form of probability density function is
$$ f(x) = \frac{(x-a)^{\alpha-1}(b-x)^{\beta-1}} {\mathrm{B}(\alpha,\beta) (b-a)^{\alpha+\beta-1}} $$
while in most cases we refer to standard beta, i.e.
$$ f(x) = \frac{x^{\alpha-1}(1-x)^{\beta-1}} { \mathrm{B}(\alpha,\beta)} $$
If $X$ is beta distributed with bounds $a$ and $b$, then you can transform it to standard beta distributed variable $Z$ by simple normalization
$$ Z = \frac{X-a}{b-a} $$
It is also easy to back-transform standard beta to beta with $a$ and $b$ bounds by
$$ X = Z \times (b-a) + a $$
So to compute pdf, cdf, or random number generation for non-standard beta, you need only the basic functions and formulas for beta distribution. If you want to use density function of standard beta with non-standard beta just remember to normalize the density, i.e. $f(\frac{X-a}{b-a})/(b-a)$.
In most cases people referring to beta distribution are talking about standard beta distribution. If the distribution has different bounds than $(0, 1)$, than it is obviously not a standard beta, so it should be clear from context.
|
Difference between standard beta and unstandard beta distributions?
|
Standard beta distribution is beta distribution bounded in $(0, 1)$ interval, so it is what we generally refer to when talking about beta distribution. Beta is not standard if it has other bounds, den
|
Difference between standard beta and unstandard beta distributions?
Standard beta distribution is beta distribution bounded in $(0, 1)$ interval, so it is what we generally refer to when talking about beta distribution. Beta is not standard if it has other bounds, denoted sometimes as $a$ and $b$ (lower and upper bound), you can find some information here.
So the general form of probability density function is
$$ f(x) = \frac{(x-a)^{\alpha-1}(b-x)^{\beta-1}} {\mathrm{B}(\alpha,\beta) (b-a)^{\alpha+\beta-1}} $$
while in most cases we refer to standard beta, i.e.
$$ f(x) = \frac{x^{\alpha-1}(1-x)^{\beta-1}} { \mathrm{B}(\alpha,\beta)} $$
If $X$ is beta distributed with bounds $a$ and $b$, then you can transform it to standard beta distributed variable $Z$ by simple normalization
$$ Z = \frac{X-a}{b-a} $$
It is also easy to back-transform standard beta to beta with $a$ and $b$ bounds by
$$ X = Z \times (b-a) + a $$
So to compute pdf, cdf, or random number generation for non-standard beta, you need only the basic functions and formulas for beta distribution. If you want to use density function of standard beta with non-standard beta just remember to normalize the density, i.e. $f(\frac{X-a}{b-a})/(b-a)$.
In most cases people referring to beta distribution are talking about standard beta distribution. If the distribution has different bounds than $(0, 1)$, than it is obviously not a standard beta, so it should be clear from context.
|
Difference between standard beta and unstandard beta distributions?
Standard beta distribution is beta distribution bounded in $(0, 1)$ interval, so it is what we generally refer to when talking about beta distribution. Beta is not standard if it has other bounds, den
|
46,673
|
Interpreting interaction terms in Cox Proportional Hazard model
|
Interactions are tricky. The short answer is: the effect of v2 is bigger if v1 is 0.
For calculation the following holds. The interaction terms suggests that having a zero for v1 and a high v2 score increases readmission. Having a 1 for v1 and a high v2 score also increases readmission, but the same score for v2 leads to a somewhat lower readmission than the first case.
Further interpretation is possible, but this is speculative. It might be that v1 is 1 is associated with a higher v2. Then the additional effect to v1 is less big. If v1 is 0 though, one could say that v2 is the sole contributing factor with a bigger impact.
|
Interpreting interaction terms in Cox Proportional Hazard model
|
Interactions are tricky. The short answer is: the effect of v2 is bigger if v1 is 0.
For calculation the following holds. The interaction terms suggests that having a zero for v1 and a high v2 score i
|
Interpreting interaction terms in Cox Proportional Hazard model
Interactions are tricky. The short answer is: the effect of v2 is bigger if v1 is 0.
For calculation the following holds. The interaction terms suggests that having a zero for v1 and a high v2 score increases readmission. Having a 1 for v1 and a high v2 score also increases readmission, but the same score for v2 leads to a somewhat lower readmission than the first case.
Further interpretation is possible, but this is speculative. It might be that v1 is 1 is associated with a higher v2. Then the additional effect to v1 is less big. If v1 is 0 though, one could say that v2 is the sole contributing factor with a bigger impact.
|
Interpreting interaction terms in Cox Proportional Hazard model
Interactions are tricky. The short answer is: the effect of v2 is bigger if v1 is 0.
For calculation the following holds. The interaction terms suggests that having a zero for v1 and a high v2 score i
|
46,674
|
Gibbs sampling from a complex full conditional
|
Here is an excerpt from our Monte Carlo Statistical Methods book:
10.3.3. Metropolizing the Gibbs Sampler
Hybrid MCMC algorithms are often useful
at an elementary level of the simulation process; that is,
when some components of the Gibbs sampler conditionals cannot be easily simulated. Rather than looking for a customized algorithm such as
Accept--Reject in each of these cases or for alternatives to Gibbs sampling, there is a compromise suggested by
Müller (1991), sometimes called ``Metropolis-within-Gibbs". In any step i of the Gibbs sampling algorithm with a difficult simulation from the full conditional
$g_i(\theta_i|\theta_j,j\neq i)$, substitute a simulation from an instrumental distribution $q_i$. This means replacing step $i$ of the Gibbs sampler
i. Simulate $$ {\tilde \theta}_i \sim
g_i(\theta_i^{(t)}|\theta_1^{(t+1)},\ldots,\theta_{i-1}^{(t+1)},\theta_{i+1}^{(t)},\ldots,\theta_p^{(t)})
$$
with
i.1. Simulate $$ {\tilde \theta}_i \sim
q_i(\theta_i|\theta_1^{(t+1)},\ldots,\theta_i^{(t)},\theta_{i+1}^{(t)}, \ldots,\theta_p^{(t)}) $$ i.2. Take $$ \theta_i^{(t+1)} =
\begin{cases} y_\theta^{(t)} & \hbox{ with probability }1-\rho, \cr
{\tilde \theta}_i & \hbox{ with probability }\rho, \cr \end{cases} $$
where $$ \rho = 1 \wedge \left\{ \frac{\left(
\displaystyle{{g_i({\tilde
\theta}_i|θ_1^{(t+1)},\ldots,θ_{i-1}^{(t+1)},θ_{i+1}^{(t)},
\ldots,θ_p^{(t)}) \over q_i({\tilde
θ}_i|θ_1^{(t+1)},\ldots,θ_{i-1}^{(t+1)},θ_i^{(t)},θ_{i+1}^{(t)},
\ldots,θ_p^{(t)})} }\right)}{\left( \displaystyle{
{g_i(θ_i^{(t)}|θ_1^{(t+1)},\ldots,θ_{i-1}^{(t+1)},θ_{i+1}^{(t)},
\ldots,θ_p^{(t)}) \over
q_i(θ_i^{(t)}|θ_1^{(t+1)},\ldots,θ_{i-1}^{(t+1)},{\tilde
θ}_i,θ_{i+1}^{(t)}, \ldots,θ_p^{(t)})}}\right)} \right\} \;. $$
An important point about this substitution is that the above Metropolis-Hastings step
is only used once in an iteration from the Gibbs sampler. The modified step
thus proposes a single simulation ${\tilde \theta}_i$
instead of trying to approximate $g_i(θ_i|θ_j,j\neq i)$ more accurately
by producing $T$ simulations from $q_i$. The reasons for this choice
are twofold:
First, the resulting hybrid algorithm is valid since
$g$ is its stationary distribution.
Second, Gibbs sampling also leads to an approximation of $g$.
To provide a more "precise" approximation of $g_i(y_i|y_j,j\neq i)$
does not necessarily lead to a better
approximation of $g$ and the replacement of $g_i$ by $q_i$ may even be
beneficial for the speed of excursion of the chain on the surface of $g$.
You may also take a look at this other X validated question.
|
Gibbs sampling from a complex full conditional
|
Here is an excerpt from our Monte Carlo Statistical Methods book:
10.3.3. Metropolizing the Gibbs Sampler
Hybrid MCMC algorithms are often useful
at an elementary level of the simulation process; that
|
Gibbs sampling from a complex full conditional
Here is an excerpt from our Monte Carlo Statistical Methods book:
10.3.3. Metropolizing the Gibbs Sampler
Hybrid MCMC algorithms are often useful
at an elementary level of the simulation process; that is,
when some components of the Gibbs sampler conditionals cannot be easily simulated. Rather than looking for a customized algorithm such as
Accept--Reject in each of these cases or for alternatives to Gibbs sampling, there is a compromise suggested by
Müller (1991), sometimes called ``Metropolis-within-Gibbs". In any step i of the Gibbs sampling algorithm with a difficult simulation from the full conditional
$g_i(\theta_i|\theta_j,j\neq i)$, substitute a simulation from an instrumental distribution $q_i$. This means replacing step $i$ of the Gibbs sampler
i. Simulate $$ {\tilde \theta}_i \sim
g_i(\theta_i^{(t)}|\theta_1^{(t+1)},\ldots,\theta_{i-1}^{(t+1)},\theta_{i+1}^{(t)},\ldots,\theta_p^{(t)})
$$
with
i.1. Simulate $$ {\tilde \theta}_i \sim
q_i(\theta_i|\theta_1^{(t+1)},\ldots,\theta_i^{(t)},\theta_{i+1}^{(t)}, \ldots,\theta_p^{(t)}) $$ i.2. Take $$ \theta_i^{(t+1)} =
\begin{cases} y_\theta^{(t)} & \hbox{ with probability }1-\rho, \cr
{\tilde \theta}_i & \hbox{ with probability }\rho, \cr \end{cases} $$
where $$ \rho = 1 \wedge \left\{ \frac{\left(
\displaystyle{{g_i({\tilde
\theta}_i|θ_1^{(t+1)},\ldots,θ_{i-1}^{(t+1)},θ_{i+1}^{(t)},
\ldots,θ_p^{(t)}) \over q_i({\tilde
θ}_i|θ_1^{(t+1)},\ldots,θ_{i-1}^{(t+1)},θ_i^{(t)},θ_{i+1}^{(t)},
\ldots,θ_p^{(t)})} }\right)}{\left( \displaystyle{
{g_i(θ_i^{(t)}|θ_1^{(t+1)},\ldots,θ_{i-1}^{(t+1)},θ_{i+1}^{(t)},
\ldots,θ_p^{(t)}) \over
q_i(θ_i^{(t)}|θ_1^{(t+1)},\ldots,θ_{i-1}^{(t+1)},{\tilde
θ}_i,θ_{i+1}^{(t)}, \ldots,θ_p^{(t)})}}\right)} \right\} \;. $$
An important point about this substitution is that the above Metropolis-Hastings step
is only used once in an iteration from the Gibbs sampler. The modified step
thus proposes a single simulation ${\tilde \theta}_i$
instead of trying to approximate $g_i(θ_i|θ_j,j\neq i)$ more accurately
by producing $T$ simulations from $q_i$. The reasons for this choice
are twofold:
First, the resulting hybrid algorithm is valid since
$g$ is its stationary distribution.
Second, Gibbs sampling also leads to an approximation of $g$.
To provide a more "precise" approximation of $g_i(y_i|y_j,j\neq i)$
does not necessarily lead to a better
approximation of $g$ and the replacement of $g_i$ by $q_i$ may even be
beneficial for the speed of excursion of the chain on the surface of $g$.
You may also take a look at this other X validated question.
|
Gibbs sampling from a complex full conditional
Here is an excerpt from our Monte Carlo Statistical Methods book:
10.3.3. Metropolizing the Gibbs Sampler
Hybrid MCMC algorithms are often useful
at an elementary level of the simulation process; that
|
46,675
|
Best basis set for polynomial expansion
|
It really depends on your needs.
However, with regression and other "linear-model" problems (such as GLMs), the standard choice is orthogonal polynomials with respect to the observed set of $x$ values (usually just called "orthogonal polynomials" in regression-type contexts). Many packages provide them (e.g. poly in R provides such a basis - you supply x and the desired degree).
That is, if $P$ is the resulting "x-matrix" (not counting the constant column) where the columns represent the linear, quadratic etc components, then $P^\top P=I$.
Like so:
> x=sort(rnorm(10,6,2))
> P=poly(x,4)
> round(crossprod(P),8) # round to 8dp
1 2 3 4
1 1 0 0 0
2 0 1 0 0
3 0 0 1 0
4 0 0 0 1
(This property extends to the constant column if you appropriately normalize it, but it's usually left as-is, so the diagonal of $X^\top X$ would then have a $1,1$ element of $n$ rather than $1$.)
For that particular set of x-values*, they look like this:
These have a number of distinct advantages over other choices (including that the parameter estimates are uncorrelated).
Some references that may be of some use to you:
Sabhash C. Narula (1979),
"Orthogonal Polynomial Regression,"
International Statistical Review, 47:1 (Apr.), pp. 31-36
Kennedy, W. J. Jr and Gentle, J. E. (1980),
Statistical Computing, Marcel Dekker.
* on the off chance anyone cares about the particular values in the example:
x
[1] 4.326638 4.458292 4.459983 4.574794 5.312988 5.380251 7.425735
[8] 8.601912 9.189405 10.864584
|
Best basis set for polynomial expansion
|
It really depends on your needs.
However, with regression and other "linear-model" problems (such as GLMs), the standard choice is orthogonal polynomials with respect to the observed set of $x$ values
|
Best basis set for polynomial expansion
It really depends on your needs.
However, with regression and other "linear-model" problems (such as GLMs), the standard choice is orthogonal polynomials with respect to the observed set of $x$ values (usually just called "orthogonal polynomials" in regression-type contexts). Many packages provide them (e.g. poly in R provides such a basis - you supply x and the desired degree).
That is, if $P$ is the resulting "x-matrix" (not counting the constant column) where the columns represent the linear, quadratic etc components, then $P^\top P=I$.
Like so:
> x=sort(rnorm(10,6,2))
> P=poly(x,4)
> round(crossprod(P),8) # round to 8dp
1 2 3 4
1 1 0 0 0
2 0 1 0 0
3 0 0 1 0
4 0 0 0 1
(This property extends to the constant column if you appropriately normalize it, but it's usually left as-is, so the diagonal of $X^\top X$ would then have a $1,1$ element of $n$ rather than $1$.)
For that particular set of x-values*, they look like this:
These have a number of distinct advantages over other choices (including that the parameter estimates are uncorrelated).
Some references that may be of some use to you:
Sabhash C. Narula (1979),
"Orthogonal Polynomial Regression,"
International Statistical Review, 47:1 (Apr.), pp. 31-36
Kennedy, W. J. Jr and Gentle, J. E. (1980),
Statistical Computing, Marcel Dekker.
* on the off chance anyone cares about the particular values in the example:
x
[1] 4.326638 4.458292 4.459983 4.574794 5.312988 5.380251 7.425735
[8] 8.601912 9.189405 10.864584
|
Best basis set for polynomial expansion
It really depends on your needs.
However, with regression and other "linear-model" problems (such as GLMs), the standard choice is orthogonal polynomials with respect to the observed set of $x$ values
|
46,676
|
Best basis set for polynomial expansion
|
Orthogonal polynomials, by construction come with a weight function $w(x)$, so that orthogonality makes sense only when referring to $w(x)$. Choosing which orthogonal polynomials to use highly depends on your domain. For example, Legendre polynomials are define on $[-1,1]$ whereas Laguerre polynomials are defined on $[0,\infty)$.
|
Best basis set for polynomial expansion
|
Orthogonal polynomials, by construction come with a weight function $w(x)$, so that orthogonality makes sense only when referring to $w(x)$. Choosing which orthogonal polynomials to use highly depends
|
Best basis set for polynomial expansion
Orthogonal polynomials, by construction come with a weight function $w(x)$, so that orthogonality makes sense only when referring to $w(x)$. Choosing which orthogonal polynomials to use highly depends on your domain. For example, Legendre polynomials are define on $[-1,1]$ whereas Laguerre polynomials are defined on $[0,\infty)$.
|
Best basis set for polynomial expansion
Orthogonal polynomials, by construction come with a weight function $w(x)$, so that orthogonality makes sense only when referring to $w(x)$. Choosing which orthogonal polynomials to use highly depends
|
46,677
|
Best basis set for polynomial expansion
|
First, you have to define what is "best". For instance, if you say that the best function is such that minimizes the least squared errors and while still being smooth, then you might end up with cubic spline basis. It all depends on your function and your understanding of what is "best".
|
Best basis set for polynomial expansion
|
First, you have to define what is "best". For instance, if you say that the best function is such that minimizes the least squared errors and while still being smooth, then you might end up with cubic
|
Best basis set for polynomial expansion
First, you have to define what is "best". For instance, if you say that the best function is such that minimizes the least squared errors and while still being smooth, then you might end up with cubic spline basis. It all depends on your function and your understanding of what is "best".
|
Best basis set for polynomial expansion
First, you have to define what is "best". For instance, if you say that the best function is such that minimizes the least squared errors and while still being smooth, then you might end up with cubic
|
46,678
|
Difference between pointwise mutual information and log likelihood ratio
|
They're very closely related. "Log-likelihood" is not a very useful name, because it sounds like it's going to be a log probability. But it's not that at all. "Log likelihood ratio" as used at times in the original paper is more correct, and that immediately hints that we should expect to see this metric as a comparison (ratio) between two different probabilities. Furthermore, the formula is structured as a weighted average of logarithms, just like the basic formula for entropy ($\sum_p p \log(p)$). These all strongly hint that this metric is closely related to an information-theoretic metric like mutual information.
Here's the formula for log likelihood given $p_n$, $k_n$, and $n_n$ as it appears in the paper:
$$ \mathrm{logL}(p_n, n_n, k_n) = k_n \log(p_n) + (n_n - k_n) \log (1 - p_n) $$
As the paper itself notes, $p$ is just $k / n$, so we need only normalize by $n$ to get the entropy formula for a Bernoulli distribution (coin flips with a weighted coin, with probability $p$ of turning up heads):
$$ \begin{align} \mathrm{BernoulliEntropy}(p, n, k) & = \frac{k}{n} \log(p) + \frac{(n - k)}{n} \log (1 - p) \\ \mathrm{BernoulliEntropy}(p) & = p \log(p) + (1 - p)\log(1 - p) \end{align} $$
This normalization is the only real difference, and I find it a little puzzling that the author didn't adopt this simplified approach.
The other equation we need (using this new $n$-normalized formulation) is the formula for cross-entropy. It is almost identical, but it compares two different probabilities: it gives us a way to measure the "cost" of representing one probability distribution with another. (Concretely, suppose you compress data from one distribution with an encoding optimized for another distribution; this tells you how many bits it will take on average.)
$$ \mathrm{CrossEntropy}(p,p') = p \log(p') + (1 - p) \log (1 - p') $$
Note that if $p$ and $p'$ are the same, then this winds up being identical to Bernoulli entropy.
$$ \mathrm{BernoulliEntropy}(p) = \mathrm{CrossEntropy}(p, p) $$
To put these formulas together, we just have to specify the exact comparison we want to make. Suppose we have data from two different coin-flipping sessions, and we want to know whether the same coin was used in both sessions, or whether there were two different coins with different weights.
We propose a null hypothesis: the coin used was the same. To test that hypothesis, we need to perform a total of eight weighted log calculations. Four of them will be simple entropy calculations, while the other four will be cross-entropy calculations; the difference will be in the probabilistic model we use. We will use the data for each session to calculate two different probabilities: $p_{1h}$ and $p_{2h}$. Then we will use all the data to calculate just one combined probability $p_{ch}$. The cross-entropy calculations will compare session probabilities to combined probabilities; the entropy calculations will compare session probabilities to themselves. Under the null hypothesis, the values will be the same, and will cancel each other out.
Recall that the logarithm of a probability is always negative; for clarity later, the formulas below are negated so that they give positive values.
Entropy calculations:
First session
Heads: $-p_{1h} \log(p_{1h})$
Tails: $-(1 - p_{1h}) \log(1 - p_{1h})$
Second session
Heads: $-p_{2h} \log(p_{2h})$
Tails: $-(1 - p_{2h}) \log(1 - p_{2h})$
Cross-entropy calculations:
First session
Heads: $-p_{1h} \log(p_{ch})$
Tails: $-(1 - p_{1h}) \log(1 - p_{ch})$
Second session
Heads: $-p_{2h} \log(p_{ch})$
Tails: $-(1 - p_{2h}) \log(1 - p_{ch})$
Cross entropy is always equal to or greater than entropy, so we want to subtract entropy from cross entropy to get a meaningful value. If they are close to equal, then the result will be zero, and we accept the null hypothesis. If not, then the value is guaranteed to be positive, and for larger and larger values, we will be more and more inclined to reject the null hypothesis.
Recall that logarithms allow us to convert subtraction into division, so subtracting the first entropy value from the first cross entropy value gives
$$ \begin{align}
& -p_{1h} \log(p_{ch}) - -p_{1h} \log(p_{1h}) \\
=\ & p_{1h} \log(p_{1h}) - p_{1h} \log(p_{ch}) \\
=\ & p_{1h} \log\frac{p_{1h}}{p_{ch}} \\
\end{align} $$
Combining all the terms and logarithms together, we get this definition of the combined, normalized log likelihood ratio, $ \mathrm{logL'} $:
$$ \begin{align}
\mathrm{logL'}
&=\ p_{1h}\ \log\frac{p_{1h}}{p_{ch}} \\
&+ (1 - p_{1h})\ \log\frac{(1 - p_{1h})}{(1 - p_{ch})} \\
&+ p_{2h}\ \log\frac{p_{2h}}{p_{ch}} \\
&+ (1 - p_{2h})\ \log\frac{(1 - p_{2h})}{(1 - p_{ch})} \\
\end{align} $$
At this point, converting this to PMI is just a matter of reinterpreting the notation. $p_{1h}$ is the probability of turning up heads in the first session, and so we could also call it the conditional probability of heads given that we're looking only at data from the first session. We can do similar things to the other probabilities from each session:
$$
\begin{align}
p_{1h} &= \mathrm{P}(h|c_1) \\
(1 - p_{1h}) &= \mathrm{P}(t|c_1) \\
p_{2h} &= \mathrm{P}(h|c_2) \\
(1 - p_{2h}) &= \mathrm{P}(t|c_2)
\end{align}
$$
The null hypothesis probabilities are not conditional but "prior" probabilities -- probabilities calculated without taking into account additional information about the sessions:
$$
\begin{align}
p_{ch} &= \mathrm{P}(h) \\
1-p_{ch} &= \mathrm{P}(t) \\
\end{align}
$$
Now, by the definition of conditional probability we have
$$ \begin{align}
\frac{\mathrm{P}(h,c)}{\mathrm{P}(c)} &= \mathrm{P}(h|c) \\
\frac{\mathrm{P}(h,c)}{\mathrm{P}(c)\mathrm{P}(h)} &= \frac{\mathrm{P}(h|c)}{\mathrm{P}(h)}
\end{align} $$
And we see that by converting the $\mathrm{logL'}$ formulas to use conditional probability notation, we have (for example)
$$ \begin{align}
& \mathrm{P}(h|c_1)\ \log\frac{\mathrm{P}(h|c_1)}{\mathrm{P}(h)} \\
\end{align} $$
This is exactly the (pointwise) formula for $\mathrm{D_{KL}}((h|c_1)\ ||\ h)$, the KL divergence of the conditional distribution of heads in session one from the prior distribution of heads. So the log likelihood, when normalized by the number of trials in each session, is the same as the sum, for each possible outcome, of KL divergences of conditional distributions from prior distributions. If you understand KL divergence, this provides a good intuition for how this test works: it measures the "distance" between the conditional and unconditional probabilities for each outcome. If the difference is large, then the null hypothesis is probably false.
The relationship between mutual information and KL divergence is well known. So we're nearly done. Starting from the above formula, we have
$$ \begin{align}
& \mathrm{P}(h|c_1)\ \log\frac{\mathrm{P}(h|c_1)}{\mathrm{P}(h)} \\
=\ & \mathrm{P}(h|c_1)\ \log\frac{\mathrm{P}(h,c_1)}{\mathrm{P}(c_1)\mathrm{P}(h)} \\
=\ & \mathrm{P}(h|c_1) \cdot \mathrm{PMI}(h; c_1)
\end{align} $$
Where the last version is based on the definition of Pointwise Mutual Information (as given here). Putting it all together:
$$ \begin{align}
\mathrm{logL'}
&= \mathrm{P}(h|c_1) \cdot \mathrm{PMI}(h; c_1) \\
&+ \mathrm{P}(t|c_1) \cdot \mathrm{PMI}(t; c_1) \\
&+ \mathrm{P}(h|c_2) \cdot \mathrm{PMI}(h; c_2) \\
&+ \mathrm{P}(t|c_2) \cdot \mathrm{PMI}(t; c_2) \\
\end{align} $$
We could recover the pre-normalized version by using total counts from the first and second sessions: multiplying $\mathrm{P}(h|c_n)$ by the number of trials in session $n$ gives the number of heads in session $n$, which recovers the original definition of $\mathrm{logL}$ at the top of this answer.
Dividing that number by the total number of trials would give $\mathrm{P}(h,c_n)$, converting this formula into the formula for mutual information, the weighted sum of PMI values for each outcome. So the difference between "log likelihood" and mutual information (pointwise or otherwise) is just a matter of normalization scheme.
|
Difference between pointwise mutual information and log likelihood ratio
|
They're very closely related. "Log-likelihood" is not a very useful name, because it sounds like it's going to be a log probability. But it's not that at all. "Log likelihood ratio" as used at times i
|
Difference between pointwise mutual information and log likelihood ratio
They're very closely related. "Log-likelihood" is not a very useful name, because it sounds like it's going to be a log probability. But it's not that at all. "Log likelihood ratio" as used at times in the original paper is more correct, and that immediately hints that we should expect to see this metric as a comparison (ratio) between two different probabilities. Furthermore, the formula is structured as a weighted average of logarithms, just like the basic formula for entropy ($\sum_p p \log(p)$). These all strongly hint that this metric is closely related to an information-theoretic metric like mutual information.
Here's the formula for log likelihood given $p_n$, $k_n$, and $n_n$ as it appears in the paper:
$$ \mathrm{logL}(p_n, n_n, k_n) = k_n \log(p_n) + (n_n - k_n) \log (1 - p_n) $$
As the paper itself notes, $p$ is just $k / n$, so we need only normalize by $n$ to get the entropy formula for a Bernoulli distribution (coin flips with a weighted coin, with probability $p$ of turning up heads):
$$ \begin{align} \mathrm{BernoulliEntropy}(p, n, k) & = \frac{k}{n} \log(p) + \frac{(n - k)}{n} \log (1 - p) \\ \mathrm{BernoulliEntropy}(p) & = p \log(p) + (1 - p)\log(1 - p) \end{align} $$
This normalization is the only real difference, and I find it a little puzzling that the author didn't adopt this simplified approach.
The other equation we need (using this new $n$-normalized formulation) is the formula for cross-entropy. It is almost identical, but it compares two different probabilities: it gives us a way to measure the "cost" of representing one probability distribution with another. (Concretely, suppose you compress data from one distribution with an encoding optimized for another distribution; this tells you how many bits it will take on average.)
$$ \mathrm{CrossEntropy}(p,p') = p \log(p') + (1 - p) \log (1 - p') $$
Note that if $p$ and $p'$ are the same, then this winds up being identical to Bernoulli entropy.
$$ \mathrm{BernoulliEntropy}(p) = \mathrm{CrossEntropy}(p, p) $$
To put these formulas together, we just have to specify the exact comparison we want to make. Suppose we have data from two different coin-flipping sessions, and we want to know whether the same coin was used in both sessions, or whether there were two different coins with different weights.
We propose a null hypothesis: the coin used was the same. To test that hypothesis, we need to perform a total of eight weighted log calculations. Four of them will be simple entropy calculations, while the other four will be cross-entropy calculations; the difference will be in the probabilistic model we use. We will use the data for each session to calculate two different probabilities: $p_{1h}$ and $p_{2h}$. Then we will use all the data to calculate just one combined probability $p_{ch}$. The cross-entropy calculations will compare session probabilities to combined probabilities; the entropy calculations will compare session probabilities to themselves. Under the null hypothesis, the values will be the same, and will cancel each other out.
Recall that the logarithm of a probability is always negative; for clarity later, the formulas below are negated so that they give positive values.
Entropy calculations:
First session
Heads: $-p_{1h} \log(p_{1h})$
Tails: $-(1 - p_{1h}) \log(1 - p_{1h})$
Second session
Heads: $-p_{2h} \log(p_{2h})$
Tails: $-(1 - p_{2h}) \log(1 - p_{2h})$
Cross-entropy calculations:
First session
Heads: $-p_{1h} \log(p_{ch})$
Tails: $-(1 - p_{1h}) \log(1 - p_{ch})$
Second session
Heads: $-p_{2h} \log(p_{ch})$
Tails: $-(1 - p_{2h}) \log(1 - p_{ch})$
Cross entropy is always equal to or greater than entropy, so we want to subtract entropy from cross entropy to get a meaningful value. If they are close to equal, then the result will be zero, and we accept the null hypothesis. If not, then the value is guaranteed to be positive, and for larger and larger values, we will be more and more inclined to reject the null hypothesis.
Recall that logarithms allow us to convert subtraction into division, so subtracting the first entropy value from the first cross entropy value gives
$$ \begin{align}
& -p_{1h} \log(p_{ch}) - -p_{1h} \log(p_{1h}) \\
=\ & p_{1h} \log(p_{1h}) - p_{1h} \log(p_{ch}) \\
=\ & p_{1h} \log\frac{p_{1h}}{p_{ch}} \\
\end{align} $$
Combining all the terms and logarithms together, we get this definition of the combined, normalized log likelihood ratio, $ \mathrm{logL'} $:
$$ \begin{align}
\mathrm{logL'}
&=\ p_{1h}\ \log\frac{p_{1h}}{p_{ch}} \\
&+ (1 - p_{1h})\ \log\frac{(1 - p_{1h})}{(1 - p_{ch})} \\
&+ p_{2h}\ \log\frac{p_{2h}}{p_{ch}} \\
&+ (1 - p_{2h})\ \log\frac{(1 - p_{2h})}{(1 - p_{ch})} \\
\end{align} $$
At this point, converting this to PMI is just a matter of reinterpreting the notation. $p_{1h}$ is the probability of turning up heads in the first session, and so we could also call it the conditional probability of heads given that we're looking only at data from the first session. We can do similar things to the other probabilities from each session:
$$
\begin{align}
p_{1h} &= \mathrm{P}(h|c_1) \\
(1 - p_{1h}) &= \mathrm{P}(t|c_1) \\
p_{2h} &= \mathrm{P}(h|c_2) \\
(1 - p_{2h}) &= \mathrm{P}(t|c_2)
\end{align}
$$
The null hypothesis probabilities are not conditional but "prior" probabilities -- probabilities calculated without taking into account additional information about the sessions:
$$
\begin{align}
p_{ch} &= \mathrm{P}(h) \\
1-p_{ch} &= \mathrm{P}(t) \\
\end{align}
$$
Now, by the definition of conditional probability we have
$$ \begin{align}
\frac{\mathrm{P}(h,c)}{\mathrm{P}(c)} &= \mathrm{P}(h|c) \\
\frac{\mathrm{P}(h,c)}{\mathrm{P}(c)\mathrm{P}(h)} &= \frac{\mathrm{P}(h|c)}{\mathrm{P}(h)}
\end{align} $$
And we see that by converting the $\mathrm{logL'}$ formulas to use conditional probability notation, we have (for example)
$$ \begin{align}
& \mathrm{P}(h|c_1)\ \log\frac{\mathrm{P}(h|c_1)}{\mathrm{P}(h)} \\
\end{align} $$
This is exactly the (pointwise) formula for $\mathrm{D_{KL}}((h|c_1)\ ||\ h)$, the KL divergence of the conditional distribution of heads in session one from the prior distribution of heads. So the log likelihood, when normalized by the number of trials in each session, is the same as the sum, for each possible outcome, of KL divergences of conditional distributions from prior distributions. If you understand KL divergence, this provides a good intuition for how this test works: it measures the "distance" between the conditional and unconditional probabilities for each outcome. If the difference is large, then the null hypothesis is probably false.
The relationship between mutual information and KL divergence is well known. So we're nearly done. Starting from the above formula, we have
$$ \begin{align}
& \mathrm{P}(h|c_1)\ \log\frac{\mathrm{P}(h|c_1)}{\mathrm{P}(h)} \\
=\ & \mathrm{P}(h|c_1)\ \log\frac{\mathrm{P}(h,c_1)}{\mathrm{P}(c_1)\mathrm{P}(h)} \\
=\ & \mathrm{P}(h|c_1) \cdot \mathrm{PMI}(h; c_1)
\end{align} $$
Where the last version is based on the definition of Pointwise Mutual Information (as given here). Putting it all together:
$$ \begin{align}
\mathrm{logL'}
&= \mathrm{P}(h|c_1) \cdot \mathrm{PMI}(h; c_1) \\
&+ \mathrm{P}(t|c_1) \cdot \mathrm{PMI}(t; c_1) \\
&+ \mathrm{P}(h|c_2) \cdot \mathrm{PMI}(h; c_2) \\
&+ \mathrm{P}(t|c_2) \cdot \mathrm{PMI}(t; c_2) \\
\end{align} $$
We could recover the pre-normalized version by using total counts from the first and second sessions: multiplying $\mathrm{P}(h|c_n)$ by the number of trials in session $n$ gives the number of heads in session $n$, which recovers the original definition of $\mathrm{logL}$ at the top of this answer.
Dividing that number by the total number of trials would give $\mathrm{P}(h,c_n)$, converting this formula into the formula for mutual information, the weighted sum of PMI values for each outcome. So the difference between "log likelihood" and mutual information (pointwise or otherwise) is just a matter of normalization scheme.
|
Difference between pointwise mutual information and log likelihood ratio
They're very closely related. "Log-likelihood" is not a very useful name, because it sounds like it's going to be a log probability. But it's not that at all. "Log likelihood ratio" as used at times i
|
46,679
|
What is posterior predictive check, and how I can do that in R?
|
The hierarchical model you describe is a generative model. The model you constructed can be used to generate "fake" data. This is a little different conceptually than using your model to make predictions.
The assumption underlying this concept is that a good model should generate fake data that is similar to the actual data set you used to make your model. A bad model will generate data that is in some way fundamentally or systematically different.
You can assess this visually or by using some metric, such as the pp.check method you tried in JAGS (I am not a JAGS user, so can't comment specifically on how this is implemented).
Procedurally how this works is:
You specify your model. In your case it looks like you want to do an ordinal regression. This looks like a similar example. Specifically I refer you to the chapter called "Ordinal Predicted Variable" in this book.
You sample and obtain posterior distributions for the parameters in your model. Looking at the figure in the linked example, these parameters are $\beta_0$, $\beta_1$ and $\sigma$.
Now draw posterior predictive samples. Over the range of your input (Dollars), draw many samples from the posteriors (or take the samples of your posteriors) of the parameters you estimated, then plug those samples into your model equation, the Happiness ~ log(Dollars) you wrote down.
You should end up with many samples of "Happiness" data at a given log(Dollars). From these samples you could, for instance, compute and plot 90% credible intervals across log(Dollar).
Plot actual data (on the y axis: Happiness, on the x axis: log(Dollars)), then overlay the draws and credible intervals of your posterior predictive samples.
Now check visually. Does your 90% credible interval contain 90% of the actual Happiness data points? Are there systematic departures of the true data from your model? Then resort to metrics such as pp.check.
This is one way of performing model validation, there are many others.
|
What is posterior predictive check, and how I can do that in R?
|
The hierarchical model you describe is a generative model. The model you constructed can be used to generate "fake" data. This is a little different conceptually than using your model to make predic
|
What is posterior predictive check, and how I can do that in R?
The hierarchical model you describe is a generative model. The model you constructed can be used to generate "fake" data. This is a little different conceptually than using your model to make predictions.
The assumption underlying this concept is that a good model should generate fake data that is similar to the actual data set you used to make your model. A bad model will generate data that is in some way fundamentally or systematically different.
You can assess this visually or by using some metric, such as the pp.check method you tried in JAGS (I am not a JAGS user, so can't comment specifically on how this is implemented).
Procedurally how this works is:
You specify your model. In your case it looks like you want to do an ordinal regression. This looks like a similar example. Specifically I refer you to the chapter called "Ordinal Predicted Variable" in this book.
You sample and obtain posterior distributions for the parameters in your model. Looking at the figure in the linked example, these parameters are $\beta_0$, $\beta_1$ and $\sigma$.
Now draw posterior predictive samples. Over the range of your input (Dollars), draw many samples from the posteriors (or take the samples of your posteriors) of the parameters you estimated, then plug those samples into your model equation, the Happiness ~ log(Dollars) you wrote down.
You should end up with many samples of "Happiness" data at a given log(Dollars). From these samples you could, for instance, compute and plot 90% credible intervals across log(Dollar).
Plot actual data (on the y axis: Happiness, on the x axis: log(Dollars)), then overlay the draws and credible intervals of your posterior predictive samples.
Now check visually. Does your 90% credible interval contain 90% of the actual Happiness data points? Are there systematic departures of the true data from your model? Then resort to metrics such as pp.check.
This is one way of performing model validation, there are many others.
|
What is posterior predictive check, and how I can do that in R?
The hierarchical model you describe is a generative model. The model you constructed can be used to generate "fake" data. This is a little different conceptually than using your model to make predic
|
46,680
|
How could I get a correlation value that accounts for gender?
|
This is called partial correlation, basically it, as Wikipedia notices,
measures the degree of association between two random variables, with
the effect of a set of controlling random variables removed
Having correlation coefficients of three variables $X$, $Y$ and $Z$ we can correct correlation $\rho_{XY}$ by controlling correlations of $X$ and $Y$ with $Z$:
$$ \rho_{XY \cdot Z } = \frac{\rho_{XY} - \rho_{XZ}\rho_{ZY}} {\sqrt{(1-\rho_{XZ}^2) (1-\rho_{ZY}^2)}} $$
This is related to multiple linear regression. To test the hypothesis about partial correlation you use $t$-statistic in similar fashion as with regular correlation but with $N-3$ degrees of freedom. The same formula can be used for Pearson's or Spearman's correlation coefficients (for more see Altman, 1991 or Revelle, n.d.).
There are multiple R functions and libraries that enable you to calculate partial correlations like pcor, ggm or psych.
Altman, D.G. (1991). Practical Statistics for Medical Research. Chapman and Hall, pp. 288-299.
Revelle, W. (n.d.). Multiple Correlation and Multiple Regression In: An introduction to psychometric theory with applications in R.
|
How could I get a correlation value that accounts for gender?
|
This is called partial correlation, basically it, as Wikipedia notices,
measures the degree of association between two random variables, with
the effect of a set of controlling random variables re
|
How could I get a correlation value that accounts for gender?
This is called partial correlation, basically it, as Wikipedia notices,
measures the degree of association between two random variables, with
the effect of a set of controlling random variables removed
Having correlation coefficients of three variables $X$, $Y$ and $Z$ we can correct correlation $\rho_{XY}$ by controlling correlations of $X$ and $Y$ with $Z$:
$$ \rho_{XY \cdot Z } = \frac{\rho_{XY} - \rho_{XZ}\rho_{ZY}} {\sqrt{(1-\rho_{XZ}^2) (1-\rho_{ZY}^2)}} $$
This is related to multiple linear regression. To test the hypothesis about partial correlation you use $t$-statistic in similar fashion as with regular correlation but with $N-3$ degrees of freedom. The same formula can be used for Pearson's or Spearman's correlation coefficients (for more see Altman, 1991 or Revelle, n.d.).
There are multiple R functions and libraries that enable you to calculate partial correlations like pcor, ggm or psych.
Altman, D.G. (1991). Practical Statistics for Medical Research. Chapman and Hall, pp. 288-299.
Revelle, W. (n.d.). Multiple Correlation and Multiple Regression In: An introduction to psychometric theory with applications in R.
|
How could I get a correlation value that accounts for gender?
This is called partial correlation, basically it, as Wikipedia notices,
measures the degree of association between two random variables, with
the effect of a set of controlling random variables re
|
46,681
|
What are the consequences of removing the tails of a distribution?
|
Removing observations below the $k$th percentile and above the (100-$k$)th before calculating some estimator is the same as trimming (i.e. calculating a trimmed estimator).
The effect of trimming on the distribution is effectively that of truncation at both ends. The impact of sample-based trimming on the observed distribution (averaged across many samples) is a little different because the quantiles are random variables rather than fixed numbers; in very large samples it becomes quite similar to truncation though.
The impact it has on whatever you're doing depends on the circumstances; for example, the impact on some estimator depends on the estimator you're applying after trimming off the observations, and on the distribution you apply the procedure to.
It can certainly reduce the effect contamination that produces gross outliers (though other estimators, such as M-estimators for location, might be a better choice than a trimmed location estimator in many situations; similarly for other kinds of estimators).
If you apply it to (for example) a variance calculation, it will result in downward bias. Some authors have suggested trimming for means and Winsorizing rather than trimming for calculating variances or standard deviations (which doesn't eliminate the bias, but will reduce it).
What statistical principles are being violated here?
I'm not sure quite what you're asking for, but the properties of many things will change; for example see the answer to the next question.
How would this change any conclusions reached during analysis of such data?
It depends on what you're doing! For example, t-tests applied to trimmed samples will no longer have the nominal significance level; but an adjustment of the variance and of the degrees of freedom should enable you to get close to the desired type I error rate.
This approach is certainly sometimes used. It's a common simple choice and in some situations it performs pretty well -- but it's not always the best choice.
You may find it helpful to read a little about robust statistics.
Edit: further answers in response to new question
You previously described a univariate procedure (trimming off large and small values from a distribution), but now you're asking about regression, which involves more than one variable. This changes things.
With regression, you're talking about a different conditional distribution of the response for each point -- you can't simply ignore the IV when trying to figure out what's most extreme.
Suppose that we would like to calculate standard scores for a measurement that account for some covariate. We do this by regressing the covariate against the measurement, then obtain standard scores using the predicted responses from the regression.
It's not clear to me how this gives you what you want. (Nor indeed is it clear which way around your regression goes; I suspect that when you says "regress against" you are phrasing it as IV "regressed against" DV, which would seem to be reversed from the usual convention.)
We would like to make this process more robust to outliers.
I'll address this leaving aside my concerns above.
Is it advisable to trim data prior to any analysis (without retaining the discarded values), then perform the analysis?
If I have understood you correctly, no, since you're applying a marginal (i.e. unconditional) approach to correct problems with a conditional model, before you have had a chance to even assess whether it's conditionally unusual. I'd instead advise considering robust regression methods.
As an alternative to simply discarding the data, could one fit a regression model using a trimmed subset, then apply this model to standardize the entire dataset?
I would advise against it for the reason outlined above.
Would this be similar to Least Trimmed Squares Regression?
If I have understood you correctly, in no way is that similar. (It does involve trimming of something, but it's not at all like I understand from what you're proposing.)
It might be worth addressing what kind of problems you anticipate with your data - is it y-outliers?, x-outliers?, some of each?, both together?
Answers to new edit:
If it's only y-outliers at issue (i.e. influential observations are not present), an M-estimate may be reasonable, but so would many other robust regression estimators. [You can use trimming, as well, but you would need to apply it to residuals ... from a robust esitmate, and if you have one already, you probably don't need to trim.]
|
What are the consequences of removing the tails of a distribution?
|
Removing observations below the $k$th percentile and above the (100-$k$)th before calculating some estimator is the same as trimming (i.e. calculating a trimmed estimator).
The effect of trimming on t
|
What are the consequences of removing the tails of a distribution?
Removing observations below the $k$th percentile and above the (100-$k$)th before calculating some estimator is the same as trimming (i.e. calculating a trimmed estimator).
The effect of trimming on the distribution is effectively that of truncation at both ends. The impact of sample-based trimming on the observed distribution (averaged across many samples) is a little different because the quantiles are random variables rather than fixed numbers; in very large samples it becomes quite similar to truncation though.
The impact it has on whatever you're doing depends on the circumstances; for example, the impact on some estimator depends on the estimator you're applying after trimming off the observations, and on the distribution you apply the procedure to.
It can certainly reduce the effect contamination that produces gross outliers (though other estimators, such as M-estimators for location, might be a better choice than a trimmed location estimator in many situations; similarly for other kinds of estimators).
If you apply it to (for example) a variance calculation, it will result in downward bias. Some authors have suggested trimming for means and Winsorizing rather than trimming for calculating variances or standard deviations (which doesn't eliminate the bias, but will reduce it).
What statistical principles are being violated here?
I'm not sure quite what you're asking for, but the properties of many things will change; for example see the answer to the next question.
How would this change any conclusions reached during analysis of such data?
It depends on what you're doing! For example, t-tests applied to trimmed samples will no longer have the nominal significance level; but an adjustment of the variance and of the degrees of freedom should enable you to get close to the desired type I error rate.
This approach is certainly sometimes used. It's a common simple choice and in some situations it performs pretty well -- but it's not always the best choice.
You may find it helpful to read a little about robust statistics.
Edit: further answers in response to new question
You previously described a univariate procedure (trimming off large and small values from a distribution), but now you're asking about regression, which involves more than one variable. This changes things.
With regression, you're talking about a different conditional distribution of the response for each point -- you can't simply ignore the IV when trying to figure out what's most extreme.
Suppose that we would like to calculate standard scores for a measurement that account for some covariate. We do this by regressing the covariate against the measurement, then obtain standard scores using the predicted responses from the regression.
It's not clear to me how this gives you what you want. (Nor indeed is it clear which way around your regression goes; I suspect that when you says "regress against" you are phrasing it as IV "regressed against" DV, which would seem to be reversed from the usual convention.)
We would like to make this process more robust to outliers.
I'll address this leaving aside my concerns above.
Is it advisable to trim data prior to any analysis (without retaining the discarded values), then perform the analysis?
If I have understood you correctly, no, since you're applying a marginal (i.e. unconditional) approach to correct problems with a conditional model, before you have had a chance to even assess whether it's conditionally unusual. I'd instead advise considering robust regression methods.
As an alternative to simply discarding the data, could one fit a regression model using a trimmed subset, then apply this model to standardize the entire dataset?
I would advise against it for the reason outlined above.
Would this be similar to Least Trimmed Squares Regression?
If I have understood you correctly, in no way is that similar. (It does involve trimming of something, but it's not at all like I understand from what you're proposing.)
It might be worth addressing what kind of problems you anticipate with your data - is it y-outliers?, x-outliers?, some of each?, both together?
Answers to new edit:
If it's only y-outliers at issue (i.e. influential observations are not present), an M-estimate may be reasonable, but so would many other robust regression estimators. [You can use trimming, as well, but you would need to apply it to residuals ... from a robust esitmate, and if you have one already, you probably don't need to trim.]
|
What are the consequences of removing the tails of a distribution?
Removing observations below the $k$th percentile and above the (100-$k$)th before calculating some estimator is the same as trimming (i.e. calculating a trimmed estimator).
The effect of trimming on t
|
46,682
|
Graphical lasso numerical problem (not SPD matrix result)
|
I ran into the same issue with some data I was using in my research- while I don't quite understand what leads to this mathematically/computationally, hopefully my answer and the code below helps:
Two comments on your problem:
The raw CSV file includes data fields which have not been de-meaned or scaled. Normalizing the data is a helpful step that is important for some types of processing. This can be accomplished with the sklearn StandardScaler() class.
The l1-regularized covariance implementation seems to be sensitive to instabilities when the empirical covariance matrix has a broad eigenvalue range. Your initial data has eigenvalues in the range of [0, 3e6].
After normalizing your input data, the eigenvalues of your empirical covariance matrix still span a relatively large range of about [0-8]. Shrinking this using the sklearn.covariance.shrunk_covariance() function can bring it into a more computationally acceptable range (from what I've read, [0,1] is ideal but slgihtly larger ranges also appear to work).
If anyone knows what's going on mathematically/computationally that causes this error, and what the caveats of shrinking the covariance matrix are in terms of the interpretation of the output, I'd love to hear your comments and improvements. However, the code below appears to both work for the problem presented by @rano, and the errors that I've run into with my research (~10k samples of data in the energy market).
import numpy as np
import pandas as pd
from sklearn import covariance, preprocessing
myData = pd.read_csv('Data/weight_comp_simple_prop.df.train.csv')
X = myData.values.astype('float64')
myScaler = preprocessing.StandardScaler()
X = myScaler.fit_transform(X)
emp_cov = covariance.empirical_covariance(X)
shrunk_cov = covariance.shrunk_covariance(emp_cov, shrinkage=0.8) # Set shrinkage closer to 1 for poorly-conditioned data
alphaRange = 10.0 ** np.arange(-8,0) # 1e-7 to 1e-1 by order of magnitude
for alpha in alphaRange:
try:
graphCov = covariance.graph_lasso(shrunk_cov, alpha)
print("Calculated graph-lasso covariance matrix for alpha=%s"%alpha)
except FloatingPointError:
print("Failed at alpha=%s"%alpha)
|
Graphical lasso numerical problem (not SPD matrix result)
|
I ran into the same issue with some data I was using in my research- while I don't quite understand what leads to this mathematically/computationally, hopefully my answer and the code below helps:
Two
|
Graphical lasso numerical problem (not SPD matrix result)
I ran into the same issue with some data I was using in my research- while I don't quite understand what leads to this mathematically/computationally, hopefully my answer and the code below helps:
Two comments on your problem:
The raw CSV file includes data fields which have not been de-meaned or scaled. Normalizing the data is a helpful step that is important for some types of processing. This can be accomplished with the sklearn StandardScaler() class.
The l1-regularized covariance implementation seems to be sensitive to instabilities when the empirical covariance matrix has a broad eigenvalue range. Your initial data has eigenvalues in the range of [0, 3e6].
After normalizing your input data, the eigenvalues of your empirical covariance matrix still span a relatively large range of about [0-8]. Shrinking this using the sklearn.covariance.shrunk_covariance() function can bring it into a more computationally acceptable range (from what I've read, [0,1] is ideal but slgihtly larger ranges also appear to work).
If anyone knows what's going on mathematically/computationally that causes this error, and what the caveats of shrinking the covariance matrix are in terms of the interpretation of the output, I'd love to hear your comments and improvements. However, the code below appears to both work for the problem presented by @rano, and the errors that I've run into with my research (~10k samples of data in the energy market).
import numpy as np
import pandas as pd
from sklearn import covariance, preprocessing
myData = pd.read_csv('Data/weight_comp_simple_prop.df.train.csv')
X = myData.values.astype('float64')
myScaler = preprocessing.StandardScaler()
X = myScaler.fit_transform(X)
emp_cov = covariance.empirical_covariance(X)
shrunk_cov = covariance.shrunk_covariance(emp_cov, shrinkage=0.8) # Set shrinkage closer to 1 for poorly-conditioned data
alphaRange = 10.0 ** np.arange(-8,0) # 1e-7 to 1e-1 by order of magnitude
for alpha in alphaRange:
try:
graphCov = covariance.graph_lasso(shrunk_cov, alpha)
print("Calculated graph-lasso covariance matrix for alpha=%s"%alpha)
except FloatingPointError:
print("Failed at alpha=%s"%alpha)
|
Graphical lasso numerical problem (not SPD matrix result)
I ran into the same issue with some data I was using in my research- while I don't quite understand what leads to this mathematically/computationally, hopefully my answer and the code below helps:
Two
|
46,683
|
Graphical lasso numerical problem (not SPD matrix result)
|
I also have run into this SPD problem. I was unable to avoid it by rescaling my data because I was interested in conducting simulations in a particular (strange) statistical regime.
I then found the recent GGLasso python package, using more recent ADMM algorithms to solve the GLasso problem. So far this has worked well in all the cases I've tried. It also contains implementations of a number of extensions to the graphical lasso, including to latent variables and multiple graphical lasso problems, which might be more appropriate and are worth checking out.
Hope this helps.
|
Graphical lasso numerical problem (not SPD matrix result)
|
I also have run into this SPD problem. I was unable to avoid it by rescaling my data because I was interested in conducting simulations in a particular (strange) statistical regime.
I then found the r
|
Graphical lasso numerical problem (not SPD matrix result)
I also have run into this SPD problem. I was unable to avoid it by rescaling my data because I was interested in conducting simulations in a particular (strange) statistical regime.
I then found the recent GGLasso python package, using more recent ADMM algorithms to solve the GLasso problem. So far this has worked well in all the cases I've tried. It also contains implementations of a number of extensions to the graphical lasso, including to latent variables and multiple graphical lasso problems, which might be more appropriate and are worth checking out.
Hope this helps.
|
Graphical lasso numerical problem (not SPD matrix result)
I also have run into this SPD problem. I was unable to avoid it by rescaling my data because I was interested in conducting simulations in a particular (strange) statistical regime.
I then found the r
|
46,684
|
Clustering a dense dataset
|
As @Anony-Mousse implies, it isn't clear right now that your data actually are clusterable. In the end, you may choose to simply chop your data into partitions, if that will serve your business purposes, but there may not be any real latent groupings.
From where I sit, I cannot provide any guaranteed solutions, but perhaps I can offer some suggestions that will be profitable:
You have a single clear outlier (in the upper right corner of the [2,3] scatterplot, e.g.) that will likely distort any analysis you try. You may want to try to investigate that store separately. In the interim, I would set that point aside.
It isn't clear how much data you have, but it looks like a lot. You state that you have "under 4k observations". If it is close to that amount, say >3k, then you have a lot. Since a good deal of exploratory data analysis will be necessary, I would randomly partition your data into two halves and explore the first half and then validate your choices with the other half afterwards.
I would experiment with various transformations of your variables to see if you can get better (i.e., more spherical) distributions. For example, taking the logarithm of your data may be appropriate. After finding a suitable transformation, check again for outliers.
Then you will need to standardize each variable so that its mean is 0 and standard deviation is 1. Be sure to keep the original mean and SD for each variable so that you can apply exactly the same transformation later when you work with the second set.
At this point (and only now), you can try clustering. I would not use k-means or k-medoids. Since you will have overlapping clusters, you will need a method that can handle that. The clustering algorithms I am familiar with that can do so are fuzzy k-means, Gaussian mixture modeling, and clustering by kernel density estimation.
Fuzzy k-means is discussed on CV here; you can also try this search. To perform fuzzy k-means in R, you can use ?fanny.
Threads about Gaussian mixture modeling can be found on the site with the gaussian-mixture tag. Finite mixture modeling can be done in R with the mclust package. I have demonstrated GMM with mclust on CV here and here.
Clustering by kernel density estimation is probably more esoteric. You can read the original paper1 here. You can use kernel densities to cluster in R with the pdfCluster package.
There is a continuity here: Fuzzy k-means essentially approximates GMM, but imposes sphericality on your clusters, which GMM does not do. GMMs make a very strong assumption that each cluster is multivariate normal (albeit possibly with different variances and covariances). If that isn't (nearly) perfectly true, the results can be distorted. Moreover, although kernel density estimates use a multivariate Gaussian kernel by default, the end result can be much more flexible and needn't yield multivariate normal clusters at all. This line of reasoning may suggest you simply go with the latter, but if the former constraints / assumptions hold they will benefit your analysis.
You mention a variety of cluster validation metrics that you are using. Those are valuable, but I would select the method and the final clustering solution by which possibility makes sense of the data given your knowledge of the topic and whether it provides actionable business intelligence. You should also try to visualize the clusters in various ways.
Check your chosen strategy by performing the exact same preprocessing and clustering on the other half of your data and see if you get similar and equally coherent / valuable results.
1. Azzalini, A. & Torelli, N. (2007). Clustering via nonparametric density estimation, Statisticis and Computing, 17, 1, pp. 71-80.
|
Clustering a dense dataset
|
As @Anony-Mousse implies, it isn't clear right now that your data actually are clusterable. In the end, you may choose to simply chop your data into partitions, if that will serve your business purpo
|
Clustering a dense dataset
As @Anony-Mousse implies, it isn't clear right now that your data actually are clusterable. In the end, you may choose to simply chop your data into partitions, if that will serve your business purposes, but there may not be any real latent groupings.
From where I sit, I cannot provide any guaranteed solutions, but perhaps I can offer some suggestions that will be profitable:
You have a single clear outlier (in the upper right corner of the [2,3] scatterplot, e.g.) that will likely distort any analysis you try. You may want to try to investigate that store separately. In the interim, I would set that point aside.
It isn't clear how much data you have, but it looks like a lot. You state that you have "under 4k observations". If it is close to that amount, say >3k, then you have a lot. Since a good deal of exploratory data analysis will be necessary, I would randomly partition your data into two halves and explore the first half and then validate your choices with the other half afterwards.
I would experiment with various transformations of your variables to see if you can get better (i.e., more spherical) distributions. For example, taking the logarithm of your data may be appropriate. After finding a suitable transformation, check again for outliers.
Then you will need to standardize each variable so that its mean is 0 and standard deviation is 1. Be sure to keep the original mean and SD for each variable so that you can apply exactly the same transformation later when you work with the second set.
At this point (and only now), you can try clustering. I would not use k-means or k-medoids. Since you will have overlapping clusters, you will need a method that can handle that. The clustering algorithms I am familiar with that can do so are fuzzy k-means, Gaussian mixture modeling, and clustering by kernel density estimation.
Fuzzy k-means is discussed on CV here; you can also try this search. To perform fuzzy k-means in R, you can use ?fanny.
Threads about Gaussian mixture modeling can be found on the site with the gaussian-mixture tag. Finite mixture modeling can be done in R with the mclust package. I have demonstrated GMM with mclust on CV here and here.
Clustering by kernel density estimation is probably more esoteric. You can read the original paper1 here. You can use kernel densities to cluster in R with the pdfCluster package.
There is a continuity here: Fuzzy k-means essentially approximates GMM, but imposes sphericality on your clusters, which GMM does not do. GMMs make a very strong assumption that each cluster is multivariate normal (albeit possibly with different variances and covariances). If that isn't (nearly) perfectly true, the results can be distorted. Moreover, although kernel density estimates use a multivariate Gaussian kernel by default, the end result can be much more flexible and needn't yield multivariate normal clusters at all. This line of reasoning may suggest you simply go with the latter, but if the former constraints / assumptions hold they will benefit your analysis.
You mention a variety of cluster validation metrics that you are using. Those are valuable, but I would select the method and the final clustering solution by which possibility makes sense of the data given your knowledge of the topic and whether it provides actionable business intelligence. You should also try to visualize the clusters in various ways.
Check your chosen strategy by performing the exact same preprocessing and clustering on the other half of your data and see if you get similar and equally coherent / valuable results.
1. Azzalini, A. & Torelli, N. (2007). Clustering via nonparametric density estimation, Statisticis and Computing, 17, 1, pp. 71-80.
|
Clustering a dense dataset
As @Anony-Mousse implies, it isn't clear right now that your data actually are clusterable. In the end, you may choose to simply chop your data into partitions, if that will serve your business purpo
|
46,685
|
A method for propagating labels to unlabelled data
|
Learning from positive and unlabeled data is often referred to as PU learning. what you describe is a common approach to these kinds of problems, though I personally dislike such iterative approaches because they are highly sensitive to false positives (if you have any).
You might want to check out two of my papers and references therein for an up-to-date overview on current research for these problems:
A Robust Ensemble Approach to Learn From Positive and Unlabeled Data Using SVM Base Models http://arxiv.org/abs/1402.3144 (published in Neurocomputing)
Assessing binary classifiers using only positive and unlabeled data: http://arxiv.org/abs/1504.06837
The first paper describes a state-of-the-art method to learn classifiers and the second is the only approach that allows you to estimate any performance metric based on contingency tables from test sets without known negatives (you read that right).
Both papers also provide a good overview of the existing literature on this subject.
|
A method for propagating labels to unlabelled data
|
Learning from positive and unlabeled data is often referred to as PU learning. what you describe is a common approach to these kinds of problems, though I personally dislike such iterative approaches
|
A method for propagating labels to unlabelled data
Learning from positive and unlabeled data is often referred to as PU learning. what you describe is a common approach to these kinds of problems, though I personally dislike such iterative approaches because they are highly sensitive to false positives (if you have any).
You might want to check out two of my papers and references therein for an up-to-date overview on current research for these problems:
A Robust Ensemble Approach to Learn From Positive and Unlabeled Data Using SVM Base Models http://arxiv.org/abs/1402.3144 (published in Neurocomputing)
Assessing binary classifiers using only positive and unlabeled data: http://arxiv.org/abs/1504.06837
The first paper describes a state-of-the-art method to learn classifiers and the second is the only approach that allows you to estimate any performance metric based on contingency tables from test sets without known negatives (you read that right).
Both papers also provide a good overview of the existing literature on this subject.
|
A method for propagating labels to unlabelled data
Learning from positive and unlabeled data is often referred to as PU learning. what you describe is a common approach to these kinds of problems, though I personally dislike such iterative approaches
|
46,686
|
A method for propagating labels to unlabelled data
|
What you describe is very sound idea. It is called Semi-Supervised Expectation-Maximization and is oftenly used in text classification. Here is some literature:
http://research.microsoft.com/en-us/um/people/xiaohe/nips08/paperaccepted/nips2008wsl1_02.pdf
http://ciitresearch.org/dl/index.php/aiml/article/view/AIML052012012
http://www.cs.cmu.edu/~tom/pubs/NigamEtAl-bookChapter.pdf
|
A method for propagating labels to unlabelled data
|
What you describe is very sound idea. It is called Semi-Supervised Expectation-Maximization and is oftenly used in text classification. Here is some literature:
http://research.microsoft.com/en-us/um/
|
A method for propagating labels to unlabelled data
What you describe is very sound idea. It is called Semi-Supervised Expectation-Maximization and is oftenly used in text classification. Here is some literature:
http://research.microsoft.com/en-us/um/people/xiaohe/nips08/paperaccepted/nips2008wsl1_02.pdf
http://ciitresearch.org/dl/index.php/aiml/article/view/AIML052012012
http://www.cs.cmu.edu/~tom/pubs/NigamEtAl-bookChapter.pdf
|
A method for propagating labels to unlabelled data
What you describe is very sound idea. It is called Semi-Supervised Expectation-Maximization and is oftenly used in text classification. Here is some literature:
http://research.microsoft.com/en-us/um/
|
46,687
|
What happens when I use gradient descent over a zero slope?
|
It won't -- gradient descent only finds a local minima*, and that "plateau" is one.
However, there are several ways to modify gradient descent to avoid problems like this one. One option is to re-run the descent algorithm multiple times, using different starting locations for each run. Runs started between B and C will converge to z=4. Runs started between D and E will converge to z=1. Since that's smaller, you'll decide that D is the (best) local minima and choose that value.
Alternatively, you can add a momentum term. Imagine a heavy cannonball rolling down a hill. Its momentum causes it to continue through small dips in the hill until it settles at the bottom. By taking into account the gradient at this timestep AND the previous ones, you may be able to jump over (smaller) local minima.
* Although it's almost universally described as a local-minima finder, Neil G points out that gradient descent actually finds regions of zero curvature. Since these are found by moving downwards as rapidly as possible, these are (hopefully) local minima, though it can settle anywhere the error surface is flat, as in your example.
|
What happens when I use gradient descent over a zero slope?
|
It won't -- gradient descent only finds a local minima*, and that "plateau" is one.
However, there are several ways to modify gradient descent to avoid problems like this one. One option is to re-run
|
What happens when I use gradient descent over a zero slope?
It won't -- gradient descent only finds a local minima*, and that "plateau" is one.
However, there are several ways to modify gradient descent to avoid problems like this one. One option is to re-run the descent algorithm multiple times, using different starting locations for each run. Runs started between B and C will converge to z=4. Runs started between D and E will converge to z=1. Since that's smaller, you'll decide that D is the (best) local minima and choose that value.
Alternatively, you can add a momentum term. Imagine a heavy cannonball rolling down a hill. Its momentum causes it to continue through small dips in the hill until it settles at the bottom. By taking into account the gradient at this timestep AND the previous ones, you may be able to jump over (smaller) local minima.
* Although it's almost universally described as a local-minima finder, Neil G points out that gradient descent actually finds regions of zero curvature. Since these are found by moving downwards as rapidly as possible, these are (hopefully) local minima, though it can settle anywhere the error surface is flat, as in your example.
|
What happens when I use gradient descent over a zero slope?
It won't -- gradient descent only finds a local minima*, and that "plateau" is one.
However, there are several ways to modify gradient descent to avoid problems like this one. One option is to re-run
|
46,688
|
What happens when I use gradient descent over a zero slope?
|
Simple answer: it won't.
Gradient descent climbs down a hill. If it reaches a plateau, it considers the algorithm converged and moves no more.
If you think that this is a fault of gradient descent, one should know that multi-modal problems are very difficult and outside of a fine grid search (which can easily be prohibitively computationally expensive and requires you to pinpoint a region where the solution must be), there's no real generic algorithm for multi-modal problems.
A simple method for handling this is restart your hill climbing algorithm (sorry, I'm used to the maximization terminology, rather than the minimization) several times from random starting points and use the best solution you get. If the problem is uni-modal, all your solutions should be relatively close. If the problem is multi-modal, hopefully one of your random start points was on the correct hill.
|
What happens when I use gradient descent over a zero slope?
|
Simple answer: it won't.
Gradient descent climbs down a hill. If it reaches a plateau, it considers the algorithm converged and moves no more.
If you think that this is a fault of gradient descent,
|
What happens when I use gradient descent over a zero slope?
Simple answer: it won't.
Gradient descent climbs down a hill. If it reaches a plateau, it considers the algorithm converged and moves no more.
If you think that this is a fault of gradient descent, one should know that multi-modal problems are very difficult and outside of a fine grid search (which can easily be prohibitively computationally expensive and requires you to pinpoint a region where the solution must be), there's no real generic algorithm for multi-modal problems.
A simple method for handling this is restart your hill climbing algorithm (sorry, I'm used to the maximization terminology, rather than the minimization) several times from random starting points and use the best solution you get. If the problem is uni-modal, all your solutions should be relatively close. If the problem is multi-modal, hopefully one of your random start points was on the correct hill.
|
What happens when I use gradient descent over a zero slope?
Simple answer: it won't.
Gradient descent climbs down a hill. If it reaches a plateau, it considers the algorithm converged and moves no more.
If you think that this is a fault of gradient descent,
|
46,689
|
What happens when I use gradient descent over a zero slope?
|
There's only one thing you need to know about gradient descent. It is complete and utter garbage, and an absolutely horrible algorithm which should not even be considered unless there are at least hundreds of millions of variables, in which case don't expect it to work well, except when solving the same problem over and over again, for which good values of learning rates have been found. it is a poor man's unsafeguarded version of steepest descent, which even in safeguarded form is bad. You'll be much better of with a trust region or line search Quasi-Newton method. Don't write your own.
Gradient descent is a misnomer. it may not even descend. Safeguarded algorithms, which use trust regions or line searches, either descend, or terminate if unable to descend. The "learning rates", in some manner of speaking, are adaptively determined by the algorithms based on what they encounter.They won't overshoot as gradient descent can, and can automatically speed up when warranted. Gradient descent wasn't even a good algorithm a century ago.
A protracted zero slope region could cause problems for any optimization algorithm, unless it is a rigorous global optimization algorithm. Rigorous global optimization algorithms, for instance based on branch and bound, do exist (I'm not talking about genetic algorithms and other heuristic rubbish, which are the moral equivalent of gradient descent), but may not succeed in solving a problem if it is too large or too difficult, and may not accept all functions. Your local optimization algorithm algorithm should check 2nd order optimum conditions if possible. That will distinguish a local minimum from a local maximum or saddlepoint.
As stated in other answers, it is a good idea to run a local optimization algorithm with several different starting values. But that algorithm should generally not be gradient descent.
In my opinion, Andrew Ng has done a great disservice to people by teaching them gradient descent. People think they know how to optimize, when they know no more about how to optimize or optimization than a 3 year old kid "driving" with a placebo steering wheel a plastic car attached to the front of a supermarket shopping cart does about driving. (And for the benefit of a certain commenter who claimed that my providing an explicit formulation of his (her) problem as a constrained optimization problem, said that I had only repeated his(her) problem, and that imposing constraints after the fact was not a good way to solve an optimization problem, and then refused to change his(her) view after I explained how constrained optimization works, which is not imposing constraints "after the fact", and that there is a very well-developed theory for constrained optimization and practical ready to go software to solve constrained optimization problems, then he (she) downvoted that very detailed, thoughtful, and friendly answer, and wrote that we can both agree that I didn't answer his(her) question) there is practical ready to go software to solve constrained optimization problems, which apparently many people who "learned" optimization from Andrew Ng et al have no idea even exists. And non-specialists are not going to do a good job of writing their own constrained optimization software (or unconstrained optimization software either). Andrew Ng does people a disservice by making them think they can. Nor is there a need to do so, since good optimization software exists, although R is littered with not so good optimization software. Any improvements on good off the shelf software to take advantage of special problem structure, for instance, are unlikely to be made effectively by someone other than an exeprt in numerical optimization and numerical analysis.
|
What happens when I use gradient descent over a zero slope?
|
There's only one thing you need to know about gradient descent. It is complete and utter garbage, and an absolutely horrible algorithm which should not even be considered unless there are at least hun
|
What happens when I use gradient descent over a zero slope?
There's only one thing you need to know about gradient descent. It is complete and utter garbage, and an absolutely horrible algorithm which should not even be considered unless there are at least hundreds of millions of variables, in which case don't expect it to work well, except when solving the same problem over and over again, for which good values of learning rates have been found. it is a poor man's unsafeguarded version of steepest descent, which even in safeguarded form is bad. You'll be much better of with a trust region or line search Quasi-Newton method. Don't write your own.
Gradient descent is a misnomer. it may not even descend. Safeguarded algorithms, which use trust regions or line searches, either descend, or terminate if unable to descend. The "learning rates", in some manner of speaking, are adaptively determined by the algorithms based on what they encounter.They won't overshoot as gradient descent can, and can automatically speed up when warranted. Gradient descent wasn't even a good algorithm a century ago.
A protracted zero slope region could cause problems for any optimization algorithm, unless it is a rigorous global optimization algorithm. Rigorous global optimization algorithms, for instance based on branch and bound, do exist (I'm not talking about genetic algorithms and other heuristic rubbish, which are the moral equivalent of gradient descent), but may not succeed in solving a problem if it is too large or too difficult, and may not accept all functions. Your local optimization algorithm algorithm should check 2nd order optimum conditions if possible. That will distinguish a local minimum from a local maximum or saddlepoint.
As stated in other answers, it is a good idea to run a local optimization algorithm with several different starting values. But that algorithm should generally not be gradient descent.
In my opinion, Andrew Ng has done a great disservice to people by teaching them gradient descent. People think they know how to optimize, when they know no more about how to optimize or optimization than a 3 year old kid "driving" with a placebo steering wheel a plastic car attached to the front of a supermarket shopping cart does about driving. (And for the benefit of a certain commenter who claimed that my providing an explicit formulation of his (her) problem as a constrained optimization problem, said that I had only repeated his(her) problem, and that imposing constraints after the fact was not a good way to solve an optimization problem, and then refused to change his(her) view after I explained how constrained optimization works, which is not imposing constraints "after the fact", and that there is a very well-developed theory for constrained optimization and practical ready to go software to solve constrained optimization problems, then he (she) downvoted that very detailed, thoughtful, and friendly answer, and wrote that we can both agree that I didn't answer his(her) question) there is practical ready to go software to solve constrained optimization problems, which apparently many people who "learned" optimization from Andrew Ng et al have no idea even exists. And non-specialists are not going to do a good job of writing their own constrained optimization software (or unconstrained optimization software either). Andrew Ng does people a disservice by making them think they can. Nor is there a need to do so, since good optimization software exists, although R is littered with not so good optimization software. Any improvements on good off the shelf software to take advantage of special problem structure, for instance, are unlikely to be made effectively by someone other than an exeprt in numerical optimization and numerical analysis.
|
What happens when I use gradient descent over a zero slope?
There's only one thing you need to know about gradient descent. It is complete and utter garbage, and an absolutely horrible algorithm which should not even be considered unless there are at least hun
|
46,690
|
Bayesian regression full conditional distribution
|
As you show in the reproduction what is written in this book, the solution is incorrect for the simple reason that the quantity $(X^{T}X)^{-1}$ is a $p\times p$ matrix, not a scalar. Hence you cannot divide by $(X^{T}X)^{-1}$. (This is a terrible way of explaining this standard derivation!)
What you can write instead is
$$
\beta^{T}X^{T}X\beta - 2\beta^{T}X^{T}Y=\beta^{T}X^{T}X\beta - 2\beta^{T}(X^{T}X)(X^{T}X)^{-1}X^{T}Y
$$
These are the first two terms of the perfect squared norm
$$
\left(\beta-\hat\beta\right)^T (X^{T}X) \left(\beta-\hat\beta\right)
$$
where $\hat\beta=(X^{T}X)^{-1}X^{T}Y$ is the least square estimator.
Therefore, the full conditional posterior distribution of $\beta$, given $\sigma$ and $\hat\beta$ is a normal distribution$$\mathcal{N}_p(\hat\beta,(X^{T}X)^{-1})$$
Note: The prior on $\sigma$ corresponding to the joint posterior is $1/\sigma^2$ rather than a uniform prior.
|
Bayesian regression full conditional distribution
|
As you show in the reproduction what is written in this book, the solution is incorrect for the simple reason that the quantity $(X^{T}X)^{-1}$ is a $p\times p$ matrix, not a scalar. Hence you cannot
|
Bayesian regression full conditional distribution
As you show in the reproduction what is written in this book, the solution is incorrect for the simple reason that the quantity $(X^{T}X)^{-1}$ is a $p\times p$ matrix, not a scalar. Hence you cannot divide by $(X^{T}X)^{-1}$. (This is a terrible way of explaining this standard derivation!)
What you can write instead is
$$
\beta^{T}X^{T}X\beta - 2\beta^{T}X^{T}Y=\beta^{T}X^{T}X\beta - 2\beta^{T}(X^{T}X)(X^{T}X)^{-1}X^{T}Y
$$
These are the first two terms of the perfect squared norm
$$
\left(\beta-\hat\beta\right)^T (X^{T}X) \left(\beta-\hat\beta\right)
$$
where $\hat\beta=(X^{T}X)^{-1}X^{T}Y$ is the least square estimator.
Therefore, the full conditional posterior distribution of $\beta$, given $\sigma$ and $\hat\beta$ is a normal distribution$$\mathcal{N}_p(\hat\beta,(X^{T}X)^{-1})$$
Note: The prior on $\sigma$ corresponding to the joint posterior is $1/\sigma^2$ rather than a uniform prior.
|
Bayesian regression full conditional distribution
As you show in the reproduction what is written in this book, the solution is incorrect for the simple reason that the quantity $(X^{T}X)^{-1}$ is a $p\times p$ matrix, not a scalar. Hence you cannot
|
46,691
|
In a neural network, do biases essentially need updates when being trained?
|
If you try to leave the biases fixed at any value, then each neuron will try to use its "overall input activation" as a kind of bias (by having a small weight to all of its inputs). This makes your learning less stable than just having a bias. If you try to fight this desire with regularization, it may not be possible for the network to encode a good solution to your problem.
How much memory are you really saving?
|
In a neural network, do biases essentially need updates when being trained?
|
If you try to leave the biases fixed at any value, then each neuron will try to use its "overall input activation" as a kind of bias (by having a small weight to all of its inputs). This makes your l
|
In a neural network, do biases essentially need updates when being trained?
If you try to leave the biases fixed at any value, then each neuron will try to use its "overall input activation" as a kind of bias (by having a small weight to all of its inputs). This makes your learning less stable than just having a bias. If you try to fight this desire with regularization, it may not be possible for the network to encode a good solution to your problem.
How much memory are you really saving?
|
In a neural network, do biases essentially need updates when being trained?
If you try to leave the biases fixed at any value, then each neuron will try to use its "overall input activation" as a kind of bias (by having a small weight to all of its inputs). This makes your l
|
46,692
|
In a neural network, do biases essentially need updates when being trained?
|
The idea is to learn the bias weights but have the activation fixed at 1. Anything else would make it an additional ordinary unit.
|
In a neural network, do biases essentially need updates when being trained?
|
The idea is to learn the bias weights but have the activation fixed at 1. Anything else would make it an additional ordinary unit.
|
In a neural network, do biases essentially need updates when being trained?
The idea is to learn the bias weights but have the activation fixed at 1. Anything else would make it an additional ordinary unit.
|
In a neural network, do biases essentially need updates when being trained?
The idea is to learn the bias weights but have the activation fixed at 1. Anything else would make it an additional ordinary unit.
|
46,693
|
Significance of regression coefficients and their equality
|
Yes. This answer interprets the question in the following way:
$\beta_1$ is significantly different from zero in the full model
$$y = \alpha + \beta_1 x_1 + \beta_2 x_2 + \varepsilon$$
$\beta_2$ is not significantly different from zero in the full model.
Either (a) $\beta_1=\beta_2$ or (b) a test of $H_0:\beta_1=\beta_2$ is not significant. The latter is equivalent to the full model not being significantly better than the reduced model
$$y = \alpha + \beta(x_1 + x_2) + \varepsilon.$$
Intuitively, $y$ must have a detectable linear relationship with $x_1$ but not with $x_2$, even though the coefficients ("slopes") of those relationships are the same. This could happen when the spread of $x_1$ in the data is substantially greater than the spread of $x_2$: the wider spread of $x_1$ will induce greater changes in $y$, even when $\beta_1 \approx \beta_2$, making $\beta_1$ more readily detectable than $\beta_2$.
To illustrate, I played around with (a) the amount of data $n$ and (b) the variance of $\varepsilon$ to produce this phenomenon. The data are
$$(x_1, x_2, y) = ((1, 2, \ldots, 2n), (1,\ldots,1,-1,\ldots,-1), x_1+x_2+\varepsilon)$$
where $\varepsilon$ are independently and identically distributed with a mean of zero and standard deviation of $3$. As $n$ grows, $x_1$ becomes more spread out (from $1$ through $2n$) while $x_2$ is confined to the interval $[-1,1]$. The true underlying relationship is $\alpha=0, \beta_1=\beta_2=1$.
The following is R code to generate this example.
n <- 12
x1 <- 1:(2*n)
x2 <- c(rep(-1,n), rep(1,n))
set.seed(17)
y <- x1 + x2 + rnorm(2*n, sd=3)
Here is the fit of the full model.
> summary(fit.full <- lm(y ~ x1+x2))
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) -0.5223 1.8358 -0.284 0.779
x1 1.1400 0.1416 8.053 7.41e-08 ***
x2 0.4886 0.9800 0.499 0.623
$\beta_1$ is significant at any reasonable threshold ($p$ is essentially zero), while, $\beta_2$ is not significant at any reasonable threshold ($p=0.623$).
The full model is not a significant improvement over the full model ($p = 0.5618$):
>fit.partial <- lm(y ~ I(x1+x2))
>anova(fit.partial, fit.full)
Analysis of Variance Table
Model 1: y ~ I(x1 + x2)
Model 2: y ~ x1 + x2
Res.Df RSS Df Sum of Sq F Pr(>F)
1 22 122.36
2 21 120.37 1 1.9924 0.3476 0.5618
|
Significance of regression coefficients and their equality
|
Yes. This answer interprets the question in the following way:
$\beta_1$ is significantly different from zero in the full model
$$y = \alpha + \beta_1 x_1 + \beta_2 x_2 + \varepsilon$$
$\beta_2$ is
|
Significance of regression coefficients and their equality
Yes. This answer interprets the question in the following way:
$\beta_1$ is significantly different from zero in the full model
$$y = \alpha + \beta_1 x_1 + \beta_2 x_2 + \varepsilon$$
$\beta_2$ is not significantly different from zero in the full model.
Either (a) $\beta_1=\beta_2$ or (b) a test of $H_0:\beta_1=\beta_2$ is not significant. The latter is equivalent to the full model not being significantly better than the reduced model
$$y = \alpha + \beta(x_1 + x_2) + \varepsilon.$$
Intuitively, $y$ must have a detectable linear relationship with $x_1$ but not with $x_2$, even though the coefficients ("slopes") of those relationships are the same. This could happen when the spread of $x_1$ in the data is substantially greater than the spread of $x_2$: the wider spread of $x_1$ will induce greater changes in $y$, even when $\beta_1 \approx \beta_2$, making $\beta_1$ more readily detectable than $\beta_2$.
To illustrate, I played around with (a) the amount of data $n$ and (b) the variance of $\varepsilon$ to produce this phenomenon. The data are
$$(x_1, x_2, y) = ((1, 2, \ldots, 2n), (1,\ldots,1,-1,\ldots,-1), x_1+x_2+\varepsilon)$$
where $\varepsilon$ are independently and identically distributed with a mean of zero and standard deviation of $3$. As $n$ grows, $x_1$ becomes more spread out (from $1$ through $2n$) while $x_2$ is confined to the interval $[-1,1]$. The true underlying relationship is $\alpha=0, \beta_1=\beta_2=1$.
The following is R code to generate this example.
n <- 12
x1 <- 1:(2*n)
x2 <- c(rep(-1,n), rep(1,n))
set.seed(17)
y <- x1 + x2 + rnorm(2*n, sd=3)
Here is the fit of the full model.
> summary(fit.full <- lm(y ~ x1+x2))
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) -0.5223 1.8358 -0.284 0.779
x1 1.1400 0.1416 8.053 7.41e-08 ***
x2 0.4886 0.9800 0.499 0.623
$\beta_1$ is significant at any reasonable threshold ($p$ is essentially zero), while, $\beta_2$ is not significant at any reasonable threshold ($p=0.623$).
The full model is not a significant improvement over the full model ($p = 0.5618$):
>fit.partial <- lm(y ~ I(x1+x2))
>anova(fit.partial, fit.full)
Analysis of Variance Table
Model 1: y ~ I(x1 + x2)
Model 2: y ~ x1 + x2
Res.Df RSS Df Sum of Sq F Pr(>F)
1 22 122.36
2 21 120.37 1 1.9924 0.3476 0.5618
|
Significance of regression coefficients and their equality
Yes. This answer interprets the question in the following way:
$\beta_1$ is significantly different from zero in the full model
$$y = \alpha + \beta_1 x_1 + \beta_2 x_2 + \varepsilon$$
$\beta_2$ is
|
46,694
|
Use Edge detection in Image classification
|
Your approach goes in the line of the popular histogram of gradients approach. See here and the corresponding Wikipedia entry. Now unless you have some already labelled data, training such a system is quite laborious. If possible, I would start by using some available implementation to experiment with, like the one offered by scikit-image.
There are some other features, like Linear Binary Pattern, but they're not as powerful as HOG. See in the module corresponding of scikit-image for a list of features and their implementations.
As for CNN, you should not need to extract any features. The system learns the features automatically. That is one of the nice properties of deep architectures. A huge number of papers show that these systems learn some edge oriented filters features (in the same line as the idea you are considering).
Note that these features do not consider color. That may be an interesting feature for you to consider. Or extract the features for each of the color channels.
Hope this helps.
|
Use Edge detection in Image classification
|
Your approach goes in the line of the popular histogram of gradients approach. See here and the corresponding Wikipedia entry. Now unless you have some already labelled data, training such a system is
|
Use Edge detection in Image classification
Your approach goes in the line of the popular histogram of gradients approach. See here and the corresponding Wikipedia entry. Now unless you have some already labelled data, training such a system is quite laborious. If possible, I would start by using some available implementation to experiment with, like the one offered by scikit-image.
There are some other features, like Linear Binary Pattern, but they're not as powerful as HOG. See in the module corresponding of scikit-image for a list of features and their implementations.
As for CNN, you should not need to extract any features. The system learns the features automatically. That is one of the nice properties of deep architectures. A huge number of papers show that these systems learn some edge oriented filters features (in the same line as the idea you are considering).
Note that these features do not consider color. That may be an interesting feature for you to consider. Or extract the features for each of the color channels.
Hope this helps.
|
Use Edge detection in Image classification
Your approach goes in the line of the popular histogram of gradients approach. See here and the corresponding Wikipedia entry. Now unless you have some already labelled data, training such a system is
|
46,695
|
Use Edge detection in Image classification
|
If you are going to use edge detection, you will have to use distance transform to do the kind of classification you are thinking of. Once that is done you need to create a distance matrix between the test image(s) (ones without the label) and the training image(s) (ones with the label).
But may I suggest using HoG transform instead, or at least a Sobel filter instead of an edge detector. The Sobel filter at least is simple to implement in Matlab and I'm sure someone has implemented the HoG filter. The reason is simple: the edge detectors give you binary features and this in my opinion makes it harder to compare since it is not scale and position invariant.
Once the feature vector is done, choose a classifier (SVMs, CART, NN etc.) to classify into the classes.
|
Use Edge detection in Image classification
|
If you are going to use edge detection, you will have to use distance transform to do the kind of classification you are thinking of. Once that is done you need to create a distance matrix between the
|
Use Edge detection in Image classification
If you are going to use edge detection, you will have to use distance transform to do the kind of classification you are thinking of. Once that is done you need to create a distance matrix between the test image(s) (ones without the label) and the training image(s) (ones with the label).
But may I suggest using HoG transform instead, or at least a Sobel filter instead of an edge detector. The Sobel filter at least is simple to implement in Matlab and I'm sure someone has implemented the HoG filter. The reason is simple: the edge detectors give you binary features and this in my opinion makes it harder to compare since it is not scale and position invariant.
Once the feature vector is done, choose a classifier (SVMs, CART, NN etc.) to classify into the classes.
|
Use Edge detection in Image classification
If you are going to use edge detection, you will have to use distance transform to do the kind of classification you are thinking of. Once that is done you need to create a distance matrix between the
|
46,696
|
Use Edge detection in Image classification
|
Remember that when doing computer vision and image processing you should assure that all images are taken in the same conditions. Preliminary preparation of data (exposure, resizing, lighting, filtering etc.) dramatically reduces problems that might occur later.
Yes, choosing image pixels as an input features seems to be a reasonable choice. Remember that for even a relatively small image the number of features grows very fast (i.e. 256 x 256 px image resutls in 65 536 features). Therefore some dimension reduction technique should be applied (i.e PCA). You might use Python scikit-learn library that provides all necessary tools.
I'm not sure about the performance of your approach - if your dataset does not have a representative amount of images of each class it would probably fail. You could consider experimenting with other features obtained from Gray Level Co-ocurance Matrix (enables many useful metrics, but your image should be represented in gray scale) or Zernike Moments to describe objects/shapes in an image (more info here).
Regards.
|
Use Edge detection in Image classification
|
Remember that when doing computer vision and image processing you should assure that all images are taken in the same conditions. Preliminary preparation of data (exposure, resizing, lighting, filteri
|
Use Edge detection in Image classification
Remember that when doing computer vision and image processing you should assure that all images are taken in the same conditions. Preliminary preparation of data (exposure, resizing, lighting, filtering etc.) dramatically reduces problems that might occur later.
Yes, choosing image pixels as an input features seems to be a reasonable choice. Remember that for even a relatively small image the number of features grows very fast (i.e. 256 x 256 px image resutls in 65 536 features). Therefore some dimension reduction technique should be applied (i.e PCA). You might use Python scikit-learn library that provides all necessary tools.
I'm not sure about the performance of your approach - if your dataset does not have a representative amount of images of each class it would probably fail. You could consider experimenting with other features obtained from Gray Level Co-ocurance Matrix (enables many useful metrics, but your image should be represented in gray scale) or Zernike Moments to describe objects/shapes in an image (more info here).
Regards.
|
Use Edge detection in Image classification
Remember that when doing computer vision and image processing you should assure that all images are taken in the same conditions. Preliminary preparation of data (exposure, resizing, lighting, filteri
|
46,697
|
Visualizing relationship between independent variable and binary response
|
I can't speak to the modeling (except to guess that the bend near 100 is too sharp to be captured by a logistic curve), but a visualization idea is to continue your binning idea to the extreme. Consider a bin for every possible interactions value which extends some fixed amount on each side. Compute the mean and CI for each of those bins. But instead of plotting 100s of interval bars, plot the means as a connected line and the upper and lower CI bounds as an area.
Here's a plot I made with your data (Thanks for sharing!) and bins of +/- 25. I smoothed the mean since it was easy to do in my software and communicates the trend better. I didn't smooth the confidence interval limits only because it would have been harder. Presumably all the computed bin stats would be smoother if I had used weighting so that the central values of each bin counted more.
More on the moving bins: For each interaction value, say 57, I looked at the interval +/25, which would be [32 .. 82). For all the values in that range (3071 for this example) I computed the mean and Std Error. Each interval may have a different count, but the SE is taking the number into account. Other methods like Loess typically look at weighted intervals of equal count. I don't know the statistical merits either way, but the graph can at least be used to suggest a non-linear function that's better than a logistic curve.
Colophon: I made the graph interactively in JMP. The graph is a relatively straightforward combination of a smoother element and an area element in JMP's Graph Builder. The hard part was in computing the bin stats using table formula columns.
|
Visualizing relationship between independent variable and binary response
|
I can't speak to the modeling (except to guess that the bend near 100 is too sharp to be captured by a logistic curve), but a visualization idea is to continue your binning idea to the extreme. Consid
|
Visualizing relationship between independent variable and binary response
I can't speak to the modeling (except to guess that the bend near 100 is too sharp to be captured by a logistic curve), but a visualization idea is to continue your binning idea to the extreme. Consider a bin for every possible interactions value which extends some fixed amount on each side. Compute the mean and CI for each of those bins. But instead of plotting 100s of interval bars, plot the means as a connected line and the upper and lower CI bounds as an area.
Here's a plot I made with your data (Thanks for sharing!) and bins of +/- 25. I smoothed the mean since it was easy to do in my software and communicates the trend better. I didn't smooth the confidence interval limits only because it would have been harder. Presumably all the computed bin stats would be smoother if I had used weighting so that the central values of each bin counted more.
More on the moving bins: For each interaction value, say 57, I looked at the interval +/25, which would be [32 .. 82). For all the values in that range (3071 for this example) I computed the mean and Std Error. Each interval may have a different count, but the SE is taking the number into account. Other methods like Loess typically look at weighted intervals of equal count. I don't know the statistical merits either way, but the graph can at least be used to suggest a non-linear function that's better than a logistic curve.
Colophon: I made the graph interactively in JMP. The graph is a relatively straightforward combination of a smoother element and an area element in JMP's Graph Builder. The hard part was in computing the bin stats using table formula columns.
|
Visualizing relationship between independent variable and binary response
I can't speak to the modeling (except to guess that the bend near 100 is too sharp to be captured by a logistic curve), but a visualization idea is to continue your binning idea to the extreme. Consid
|
46,698
|
Gaussian process regression: leave-one-out prediction
|
In the general noisy or ``signal $+$ noise'' framework $y_i = f(\mathbf{x}_i)
+ \epsilon_i$, several observations can be done at the same location
$\mathbf{x}_i$, so the notations $Y(\mathbf{x}_i)$ and $f_{-i}$
can be misleading then.
Suppose first that the $n$ locations $\mathbf{x}_i$ are distinct, so
that deleting an observation and deleting a location are the same
thing. Using gaussian conditional expectation, the predicted (or
smoothed) value for the signal vector $\mathbf{f}$ is
$\widehat{\mathbf{f}} = \mathbf{H} \mathbf{y}$ where $\mathbf{H}:=
\mathbf{K}(\mathbf{K} + \sigma^2_\epsilon \mathbf{I}_n)^{-1}$. Then we can use
Cook and Weisberg's formula for a linear smoother
$$
y_i - \widehat{f}_{-i} = \frac{y_i - \widehat{f}_i }{1 - H_{ii}}
$$
which requires $H_{ii} \neq 1$. This rewrites in a form similar to
yours $\widehat{f}_{-i} = y_i - [\mathbf{B}\mathbf{y}]_i /
[\mathbf{B}]_{ii}$ with
$
\mathbf{B} := \mathbf{I}_n - \mathbf{H} =
\sigma^2_\epsilon (\mathbf{K} + \sigma^2_\epsilon \mathbf{I}_n)^{-1}
$
and we get your formula after simplifying $\sigma^2_\epsilon$ in the fraction.
So in this case it is enough to replace the covariance $\mathbf{K}$ by
$\mathbf{K} + \sigma^2_\epsilon \mathbf{I}_n$.
In the general case, the $n$ observations $y_i$ are related to $p
\leqslant n$ signal values $f_k:=f(\mathbf{x}_k)$ at $p$ distinct
points. Most likely, a cross-validation will then leave out all the
$y_i$ at the same location $\mathbf{x}_k$ (and not only one $y_i$). Then we are back to the first case by averaging the
observations at the same location and by modifying consequently the
variance. For the location $\mathbf{x}_k$, the variance
$\sigma^2_\epsilon$ must be replaced by $\sigma^2_\epsilon/n_k$ where
$n_k$ is the number of observations at site $k$, hence $\sigma^2_\epsilon
\mathbf{I}_n$ above will be replaced by a diagonal matrix. This general
context can be regarded as a Bayes linear regression where the
parameter $\mathbf{f}$ has prior
$\text{Norm}(\mathbf{0},\,\mathbf{K})$ and $\sigma^2_\epsilon$ is
known, so conjugacy holds and we have update formulas that can be used
for the deletion of one observation if needed.
The Cook and Weisberg's formula above (p. 33 of their famous 1982 book
Residuals and Influence in Regression) holds for smoothing splines
as well as for linear regression. It can be proved by using the
Woodbury matrix identiy
or by purely statistical arguments involving optimality (e.g. minimal
variance). In
$$
\widehat{f}_i = H_{ii} \,y_i + (1- H_{ii}) \sum_{j \neq i} \frac{H_{ij}}{1- H_{ii}} \,y_j,
$$
the sum $\sum$ at right hand side can be identified as the optimal
prediction of $f_i$ based on the $y_j$ with $ j\neq i$. Indeed it is
unbiased, and it is also optimal because $\widehat{f}_i$ would
otherwise no longer be such. The formula does not hold as such for
non-noisy (interpolating) kriging because then $H_{ii} = 1$.
|
Gaussian process regression: leave-one-out prediction
|
In the general noisy or ``signal $+$ noise'' framework $y_i = f(\mathbf{x}_i)
+ \epsilon_i$, several observations can be done at the same location
$\mathbf{x}_i$, so the notations $Y(\mathbf{x}_i)$ an
|
Gaussian process regression: leave-one-out prediction
In the general noisy or ``signal $+$ noise'' framework $y_i = f(\mathbf{x}_i)
+ \epsilon_i$, several observations can be done at the same location
$\mathbf{x}_i$, so the notations $Y(\mathbf{x}_i)$ and $f_{-i}$
can be misleading then.
Suppose first that the $n$ locations $\mathbf{x}_i$ are distinct, so
that deleting an observation and deleting a location are the same
thing. Using gaussian conditional expectation, the predicted (or
smoothed) value for the signal vector $\mathbf{f}$ is
$\widehat{\mathbf{f}} = \mathbf{H} \mathbf{y}$ where $\mathbf{H}:=
\mathbf{K}(\mathbf{K} + \sigma^2_\epsilon \mathbf{I}_n)^{-1}$. Then we can use
Cook and Weisberg's formula for a linear smoother
$$
y_i - \widehat{f}_{-i} = \frac{y_i - \widehat{f}_i }{1 - H_{ii}}
$$
which requires $H_{ii} \neq 1$. This rewrites in a form similar to
yours $\widehat{f}_{-i} = y_i - [\mathbf{B}\mathbf{y}]_i /
[\mathbf{B}]_{ii}$ with
$
\mathbf{B} := \mathbf{I}_n - \mathbf{H} =
\sigma^2_\epsilon (\mathbf{K} + \sigma^2_\epsilon \mathbf{I}_n)^{-1}
$
and we get your formula after simplifying $\sigma^2_\epsilon$ in the fraction.
So in this case it is enough to replace the covariance $\mathbf{K}$ by
$\mathbf{K} + \sigma^2_\epsilon \mathbf{I}_n$.
In the general case, the $n$ observations $y_i$ are related to $p
\leqslant n$ signal values $f_k:=f(\mathbf{x}_k)$ at $p$ distinct
points. Most likely, a cross-validation will then leave out all the
$y_i$ at the same location $\mathbf{x}_k$ (and not only one $y_i$). Then we are back to the first case by averaging the
observations at the same location and by modifying consequently the
variance. For the location $\mathbf{x}_k$, the variance
$\sigma^2_\epsilon$ must be replaced by $\sigma^2_\epsilon/n_k$ where
$n_k$ is the number of observations at site $k$, hence $\sigma^2_\epsilon
\mathbf{I}_n$ above will be replaced by a diagonal matrix. This general
context can be regarded as a Bayes linear regression where the
parameter $\mathbf{f}$ has prior
$\text{Norm}(\mathbf{0},\,\mathbf{K})$ and $\sigma^2_\epsilon$ is
known, so conjugacy holds and we have update formulas that can be used
for the deletion of one observation if needed.
The Cook and Weisberg's formula above (p. 33 of their famous 1982 book
Residuals and Influence in Regression) holds for smoothing splines
as well as for linear regression. It can be proved by using the
Woodbury matrix identiy
or by purely statistical arguments involving optimality (e.g. minimal
variance). In
$$
\widehat{f}_i = H_{ii} \,y_i + (1- H_{ii}) \sum_{j \neq i} \frac{H_{ij}}{1- H_{ii}} \,y_j,
$$
the sum $\sum$ at right hand side can be identified as the optimal
prediction of $f_i$ based on the $y_j$ with $ j\neq i$. Indeed it is
unbiased, and it is also optimal because $\widehat{f}_i$ would
otherwise no longer be such. The formula does not hold as such for
non-noisy (interpolating) kriging because then $H_{ii} = 1$.
|
Gaussian process regression: leave-one-out prediction
In the general noisy or ``signal $+$ noise'' framework $y_i = f(\mathbf{x}_i)
+ \epsilon_i$, several observations can be done at the same location
$\mathbf{x}_i$, so the notations $Y(\mathbf{x}_i)$ an
|
46,699
|
Likelihood-based hypothesis testing
|
As noted in the comments, the Wald statistic is simple, powerful and therefore a good choice for this problem. Now, for two Poisson populations, presumably independent, we wish to test the hypotheses that their parameters are equal, namely:
$$H_0: \lambda_1=\lambda_2\quad \text{vs} \quad H_1 :\lambda_1 \neq \lambda_2$$
The Wald statistic in this case is defined as
$$Z=\frac{\widehat{\lambda}_1-\widehat{\lambda}_2}{\sqrt{var({\widehat{\lambda}_1})+var({\widehat{\lambda}_2)}}}$$
and according to the theory of maximum likelihood it has an asymptotic standard normal distribution. The mle for the parameter $\lambda$ is of course the sample mean, so this is what should go in the numerator.
The denominator is a little more complicated. To see this, note that for the sample mean
$$var(\bar{X})=\frac{\sigma^2}{n}$$
but under the Poisson assumption, $\sigma^2=\mu$, right? So the question is, which estimator should we use for $\sigma^2$, the sample variance or the sample mean? The asymptotic distribution holds either way.
The answer is the sample mean, despite the fact that this might seem counter-intuitive. The reason is that the sample mean in a Poisson distribution is the UMVUE for the parameter $\lambda$ and therefore by using that instead of the sample variance, we gain precision.
We now have everything we need. The test takes the form:
$$Z=\frac{\widehat{\lambda}_1-\widehat{\lambda}_2}{\sqrt{\frac{{\widehat{\lambda}_1}}{n_1}+\frac{{\widehat{\lambda}_2}}{n_2}}}$$
Once you compute it, you can find the two-sided p-value from the Normal distribution or you can square it and look at the one-sided p-value of the $\chi^2 (1) $ distribution. This is often more convenient.
Hope this helps.
|
Likelihood-based hypothesis testing
|
As noted in the comments, the Wald statistic is simple, powerful and therefore a good choice for this problem. Now, for two Poisson populations, presumably independent, we wish to test the hypotheses
|
Likelihood-based hypothesis testing
As noted in the comments, the Wald statistic is simple, powerful and therefore a good choice for this problem. Now, for two Poisson populations, presumably independent, we wish to test the hypotheses that their parameters are equal, namely:
$$H_0: \lambda_1=\lambda_2\quad \text{vs} \quad H_1 :\lambda_1 \neq \lambda_2$$
The Wald statistic in this case is defined as
$$Z=\frac{\widehat{\lambda}_1-\widehat{\lambda}_2}{\sqrt{var({\widehat{\lambda}_1})+var({\widehat{\lambda}_2)}}}$$
and according to the theory of maximum likelihood it has an asymptotic standard normal distribution. The mle for the parameter $\lambda$ is of course the sample mean, so this is what should go in the numerator.
The denominator is a little more complicated. To see this, note that for the sample mean
$$var(\bar{X})=\frac{\sigma^2}{n}$$
but under the Poisson assumption, $\sigma^2=\mu$, right? So the question is, which estimator should we use for $\sigma^2$, the sample variance or the sample mean? The asymptotic distribution holds either way.
The answer is the sample mean, despite the fact that this might seem counter-intuitive. The reason is that the sample mean in a Poisson distribution is the UMVUE for the parameter $\lambda$ and therefore by using that instead of the sample variance, we gain precision.
We now have everything we need. The test takes the form:
$$Z=\frac{\widehat{\lambda}_1-\widehat{\lambda}_2}{\sqrt{\frac{{\widehat{\lambda}_1}}{n_1}+\frac{{\widehat{\lambda}_2}}{n_2}}}$$
Once you compute it, you can find the two-sided p-value from the Normal distribution or you can square it and look at the one-sided p-value of the $\chi^2 (1) $ distribution. This is often more convenient.
Hope this helps.
|
Likelihood-based hypothesis testing
As noted in the comments, the Wald statistic is simple, powerful and therefore a good choice for this problem. Now, for two Poisson populations, presumably independent, we wish to test the hypotheses
|
46,700
|
t-statistic before-after
|
You did the problem correctly; the site did not. It committed a well-known error of not retaining intermediate results to sufficient precision, causing its final answer to be erroneous.
Forensic Analysis
This site takes the student through a guided sequence of questions to go through the steps of conducting a t-test. After formulating null and alternative hypotheses, the student is asked to compute intermediate results such as the mean difference (-3) and its standard error (approximately 0.421637). However, it insists that the values be entered only to limited precision. The only way to proceed is to round the SE to 0.42. At this point, the system requires the student to replace the correct value of the SE with the rounded value. This causes the correct t-statistic, approximately equal to -7.115125, to be computed as -3/0.42 = -7.14. That (or something very close to it) is the answer one must enter in order to proceed!
Post-Mortem Rant
The pedagogical errors in this approach are appalling: the practice of statistics is reduced to remembering names for situations and procedures, using them to look up and compute a series of formulas. Correct answers and many near-correct answers are considered wrong. Forcibly incorrect answers have to be propagated through a calculation ultimately to produce an incorrect final answer. Students are reduced to guessing what the site might accept, without having any guidance concerning the errors they might possibly have made. It is difficult to imagine a nastier climate in which to try to learn anything.
|
t-statistic before-after
|
You did the problem correctly; the site did not. It committed a well-known error of not retaining intermediate results to sufficient precision, causing its final answer to be erroneous.
Forensic Ana
|
t-statistic before-after
You did the problem correctly; the site did not. It committed a well-known error of not retaining intermediate results to sufficient precision, causing its final answer to be erroneous.
Forensic Analysis
This site takes the student through a guided sequence of questions to go through the steps of conducting a t-test. After formulating null and alternative hypotheses, the student is asked to compute intermediate results such as the mean difference (-3) and its standard error (approximately 0.421637). However, it insists that the values be entered only to limited precision. The only way to proceed is to round the SE to 0.42. At this point, the system requires the student to replace the correct value of the SE with the rounded value. This causes the correct t-statistic, approximately equal to -7.115125, to be computed as -3/0.42 = -7.14. That (or something very close to it) is the answer one must enter in order to proceed!
Post-Mortem Rant
The pedagogical errors in this approach are appalling: the practice of statistics is reduced to remembering names for situations and procedures, using them to look up and compute a series of formulas. Correct answers and many near-correct answers are considered wrong. Forcibly incorrect answers have to be propagated through a calculation ultimately to produce an incorrect final answer. Students are reduced to guessing what the site might accept, without having any guidance concerning the errors they might possibly have made. It is difficult to imagine a nastier climate in which to try to learn anything.
|
t-statistic before-after
You did the problem correctly; the site did not. It committed a well-known error of not retaining intermediate results to sufficient precision, causing its final answer to be erroneous.
Forensic Ana
|
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.