What are the dangers of violating the homoscedasticity assumption for linear regression?
It is good to remember that having unbiased estimators does not mean that the model is "right". In many situations, the least squares criterion for regression coefficient estimation gives rise to a model that either has (1) regression coefficients that don't have the right meaning or (2) predictions that are tilted towards minimizing large errors but make up for it by having many small errors. For example, some analysts believe that even when transforming to $\log(Y)$ makes the model fit well, it is valid to predict $Y$ using OLS because the estimates are unbiased. This will minimize the sum of squared errors but partition the effects across the $\beta$s incorrectly and result in a non-competitive sum of absolute errors. Sometimes a lack of constancy of variance signals a more fundamental modeling problem.
When looking at competing models (e.g., for $Y$ vs. $\log(Y)$ vs. ordinal regression), I like to compare predictive accuracy using measures that the fitting process did not, by definition, already optimize.
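A small simulation (my own sketch, not from the original answer) illustrating the back-transformation point: exponentiating OLS predictions of $\log(Y)$ estimates the conditional *median* of $Y$, not its mean, so on the original scale it systematically under-predicts when the log-scale errors are normal.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: the true model is linear on the log scale
x = rng.uniform(0, 4, 10_000)
log_y = 1 + 0.5 * x + rng.normal(0, 1.0, x.size)
y = np.exp(log_y)

# OLS fit of log(Y) on x
X = np.column_stack([np.ones_like(x), x])
beta = np.linalg.lstsq(X, log_y, rcond=None)[0]

# Naive back-transform: estimates the conditional *median* of Y,
# so it under-predicts E[Y] when log-scale errors are normal
pred_naive = np.exp(X @ beta)

# Lognormal mean correction: E[Y | x] = exp(mu(x) + sigma^2 / 2)
sigma2 = np.mean((log_y - X @ beta) ** 2)
pred_mean = pred_naive * np.exp(sigma2 / 2)

print(y.mean(), pred_naive.mean(), pred_mean.mean())
```

With unit log-scale error variance, the naive predictions fall short of the sample mean by roughly a factor of $e^{1/2}\approx 1.65$, while the corrected predictions track it closely.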
What are the dangers of violating the homoscedasticity assumption for linear regression?
There is good information here in the other answers, particularly on your first question. I thought I would add some complementary information regarding your last two questions.
The problems associated with heteroscedasticity are not limited to extrapolation. Since they primarily involve confidence intervals, p-values, and prediction limits being incorrect, they apply throughout the range of your data.
Strictly speaking, the problems associated with heteroscedasticity exist with even the smallest amount of heteroscedasticity. However, as you might suspect, with very little heteroscedasticity the problems are very small as well. There is no true 'bright line' where heteroscedasticity becomes too much, but a rule of thumb is that linear models are not too badly affected when the largest variance is $\le 4\times$ the smallest variance.
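As a rough check of that rule of thumb (my own sketch, not from the answer), one can simulate OLS fits in which the error variance grows across the predictor range and track how often the nominal 95% confidence interval for the slope covers the true value:

```python
import numpy as np

rng = np.random.default_rng(1)

def slope_ci_coverage(var_ratio, n=50, reps=2000):
    """Empirical coverage of the nominal 95% OLS confidence interval
    for the slope when the error variance grows linearly across x,
    with largest variance = var_ratio * smallest variance."""
    x = np.linspace(0, 1, n)
    sd = np.sqrt(1 + (var_ratio - 1) * x)   # variance runs 1 .. var_ratio
    X = np.column_stack([np.ones(n), x])
    XtX_inv = np.linalg.inv(X.T @ X)
    hits = 0
    for _ in range(reps):
        y = 2 + 3 * x + rng.normal(0, sd)   # true slope is 3
        beta = XtX_inv @ X.T @ y
        resid = y - X @ beta
        s2 = resid @ resid / (n - 2)        # usual homoscedastic estimate
        se = np.sqrt(s2 * XtX_inv[1, 1])
        hits += beta[1] - 1.96 * se <= 3 <= beta[1] + 1.96 * se
    return hits / reps

coverage = {r: slope_ci_coverage(r) for r in (1, 4, 100)}
print(coverage)  # coverage erodes as the variance ratio grows
```

With a variance ratio of 1 the coverage sits near 95%; at a ratio of 4 it is only mildly off; at 100 the interval noticeably undercovers, consistent with the rule of thumb.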
What's considered a good log loss?
The log loss is simply $L(p_i)=-\log(p_i)$, where $p_i$ is the probability attributed to the real class.
So $L(p)=0$ is good: we attributed probability $1$ to the right class, while $L(p)=+\infty$ is bad, because we attributed probability $0$ to the actual class.
So, answering your question, an average log loss of $0.5$ means that the geometric mean of the probabilities you attributed to the right class across samples is $e^{-0.5}\approx0.61$.
Now, deciding whether this is good enough is application-dependent, so it is up to you to make that argument.
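As a quick sanity check (my own illustration, with $p_i$ the probability assigned to each sample's true class):

```python
import numpy as np

def log_loss(p_true_class):
    """Mean negative log of the probability assigned to the true class."""
    p = np.asarray(p_true_class, dtype=float)
    return -np.log(p).mean()

# exp(-average log loss) is the geometric mean of the probabilities
# attributed to the right class; a loss of 0.5 corresponds to ~0.61.
p = np.array([0.9, 0.5, 0.65, 0.5])
ll = log_loss(p)
print(ll, np.exp(-ll))
```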
What's considered a good log loss?
Like any metric, a good value is one better than the "dumb", by-chance guess you would make with no information on the observations. This is called the intercept-only model in statistics.
This "dumb" guess depends on two factors:
the number of classes
the balance of classes: their prevalence in the observed dataset
For the log loss metric, a usual "well-known" reference is that 0.693 is the non-informative value. This figure is obtained by predicting p = 0.5 for either class of a binary problem, and it is valid only for balanced binary problems. When the prevalence of one class is 10%, you would instead always predict p = 0.1 for that class; that becomes the baseline of dumb, by-chance prediction, because always predicting 0.5 would be dumber.
I. Impact of the number of classes N on the dumb log loss:
In the balanced case (every class has the same prevalence), when you predict p = prevalence = 1/N for every observation, the equation becomes simply:
Logloss = -log(1/N) = log(N)
Here log is ln, the natural (Napierian) logarithm, for those who use that convention.
In the binary case, N = 2: Logloss = -log(1/2) = 0.693.
So the dumb log losses are the following:
N = 2: 0.693, N = 3: 1.099, N = 4: 1.386, N = 5: 1.609, N = 10: 2.303
II. Impact of the prevalence of classes on the dumb log loss:
a. Binary classification case
In this case we always predict p(i) = prevalence(i), and the dumb log loss is the entropy -[p log(p) + (1-p) log(1-p)]:
p = 0.5: 0.693, p = 0.2: 0.500, p = 0.1: 0.325, p = 0.02: 0.098, p = 0.01: 0.056
So when classes are very unbalanced (prevalence < 2%), a log loss of 0.1 can actually be very bad, just as an accuracy of 98% would be bad in that case. Maybe log loss would then not be the best metric to use.
b. Three-class case
The "dumb" log loss again depends on the prevalences; for the balanced three-class case it is -log(1/3) = 1.099.
We can see here the values for the balanced binary and three-class cases (0.69 and 1.10).
CONCLUSION
A log loss of 0.69 may be good in a multiclass problem, and very bad in a binary biased case.
Depending on your case, you had better compute the baseline of the problem yourself, to check the meaning of your prediction.
In the biased cases, log loss has the same problem as accuracy and other loss functions: it provides only a global measurement of your performance. So you had better complement your understanding with metrics focused on the minority classes (recall and precision), or maybe not use log loss at all.
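The baselines above follow directly from the entropy formula; a small sketch of the computation (my own, using the natural log as in the answer):

```python
import numpy as np

def baseline_log_loss(prevalences):
    """Log loss of the 'dumb' model that always predicts the class
    prevalences: the entropy -sum(p * ln(p))."""
    p = np.asarray(prevalences, dtype=float)
    return -np.sum(p * np.log(p))

# Balanced cases reduce to -ln(1/N) = ln(N)
print(baseline_log_loss([0.5, 0.5]))           # ~0.693
print(baseline_log_loss([1/3, 1/3, 1/3]))      # ~1.099
# Heavily unbalanced binary case: a much smaller baseline
print(baseline_log_loss([0.02, 0.98]))         # ~0.098
```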
What's considered a good log loss?
This is actually more complicated than Firebug's response suggests: it all depends on the inherent variation of the process you are trying to predict.
By variation I mean: 'if an event were to repeat under the exact same conditions, known and unknown, what is the probability that the same outcome would occur again?'
A perfect predictor would have, for an event of probability $P$, the loss
$$\text{Loss} = -[P \ln P + (1-P) \ln (1-P)]$$
If you are trying to predict something where, at its worst, some events will be predicted with a 50/50 outcome, then by integrating and taking the average, the average loss would be $L=0.5$.
If what you are trying to predict is a tad more repeatable, the loss of a perfect model is lower. For example, say that with sufficient information a perfect model could predict the outcome of an event where, across all possible events, the worst it could say is 'this event will happen with 90% probability'; then the average loss would be $L\approx0.18$.
There is also a difference if the distribution of probabilities is not uniform.
So the answer to your question is: 'it depends on the nature of what you are trying to predict'.
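A numeric sketch of that integration step (my own, assuming the true event probabilities are uniformly distributed between the stated worst case and 1):

```python
import numpy as np

def avg_perfect_loss(p_min, n=1_000_001):
    """Average log loss of a perfect (calibrated) predictor when the
    true event probability is uniform on [p_min, 1]: the mean of the
    binary entropy H(p) = -p ln(p) - (1-p) ln(1-p) over that range."""
    p = np.linspace(p_min, 1, n)[:-1]  # drop p = 1 to avoid log(0)
    H = -p * np.log(p) - (1 - p) * np.log(1 - p)
    return H.mean()

print(avg_perfect_loss(0.5))  # ~0.5
print(avg_perfect_loss(0.9))  # ~0.188
```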
What's considered a good log loss?
I'd say the standard statistics answer is to compare to the intercept-only model (this handles the unbalanced classes mentioned in other answers); cf. McFadden's pseudo-$R^2$.
https://stats.idre.ucla.edu/other/mult-pkg/faq/general/faq-what-are-pseudo-r-squareds/
Now the problem is what the best attainable value is. Fundamentally, the problem is that the probability of an event is undefined outside a model for the events. The way I would suggest is to take your test data and aggregate it to a certain level to get a probability estimate, then calculate the log loss of that estimate.
E.g., if you are predicting click-through rate based on (web_site, ad_id, consumer_id), you aggregate clicks and impressions to, say, the web_site level and calculate the CTR on the test set for each website. Then calculate the log loss on your test data set using these test click-through rates as predictions. This is the optimal log loss on your test set for a model using only website ids.
The problem is that we can make this loss as small as we like by just adding more features until each record is uniquely identified.
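A sketch of the aggregation idea with made-up data (the site labels and click values are hypothetical, not from the answer):

```python
import numpy as np

# Hypothetical test-set click log: one entry per impression
web_site = np.array(["a", "a", "a", "b", "b", "b", "b", "b"])
click = np.array([1, 0, 0, 1, 1, 0, 0, 0], dtype=float)

# Aggregate to web_site level: each impression gets its site's
# observed test-set CTR as the "prediction"
p = np.empty_like(click)
for site in np.unique(web_site):
    mask = web_site == site
    p[mask] = click[mask].mean()

# Log loss of these aggregated predictions: the best a model that
# sees only web_site could achieve on this test set
eps = 1e-15  # guard against log(0) for sites with CTR of 0 or 1
p = np.clip(p, eps, 1 - eps)
optimal_ll = -np.mean(click * np.log(p) + (1 - click) * np.log(1 - p))
print(round(optimal_ll, 4))  # 0.6593
```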
What's considered a good log loss?
As others have pointed out, the loss is the negative log of the probability assigned to the correct class, so $e^{-\text{loss}}$ recovers that probability. For example, losses of $-\log(.9)$, $-\log(.8)$, $-\log(.7)$, $-\log(.6)$, and $-\log(.5)$, i.e. $.11$, $.22$, $.36$, $.51$, and $.69$, correspond to probabilities of correct classification of 90%, 80%, 70%, 60%, and 50%. Thinking evaluatively, a random classifier in a balanced classification problem will, on average, make correct predictions $1/n_\text{classes}$ of the time, so the loss of a random classifier would be $-\log(1/n_\text{classes})$; by log laws this equals $\log((1/n_\text{classes})^{-1}) = \log(n_\text{classes})$. For $n_\text{classes}$ in $[2,10]$, random classifiers produce losses in $[\log 2, \log 10]$, i.e. $(0.69, 1.10, 1.39, 1.61, 1.79, 1.95, 2.08, 2.20, 2.30)$ respectively, if you're looking for some concrete numerical benchmarks.
For unbalanced classification refer to Fed Zee's answer, and for determining the significance of a better-than-random log loss, look at significance testing with binomial distributions.
Is Tikhonov regularization the same as Ridge Regression?
Tikhonov regularization is a larger set than ridge regression. Here is my attempt to spell out exactly how they differ.
Suppose that for a known matrix $A$ and vector $\mathbf{b}$, we wish to find a vector $\mathbf{x}$ such that
$A\mathbf{x}=\mathbf{b}$.
The standard approach is ordinary least squares linear regression. However, if no $\mathbf{x}$ satisfies the equation or more than one $\mathbf{x}$ does—that is, the solution is not unique—the problem is said to be ill-posed. Ordinary least squares seeks to minimize the sum of squared residuals, which can be compactly written as
$\|A\mathbf{x}-\mathbf{b}\|^2 $
where $\left \| \cdot \right \|$ is the Euclidean norm. In matrix notation the solution, denoted by $\hat{x}$, is given by:
$\hat{x} = (A^{T}A)^{-1}A^{T}\mathbf{b}$
Tikhonov regularization minimizes
$\|A\mathbf{x}-\mathbf{b}\|^2+ \|\Gamma \mathbf{x}\|^2$
for some suitably chosen Tikhonov matrix $\Gamma$. An explicit matrix-form solution, denoted by $\hat{x}$, is given by:
$\hat{x} = (A^{T}A+ \Gamma^{T} \Gamma )^{-1}A^{T}\mathbf{b}$
The effect of regularization may be varied via the scale of the matrix $\Gamma$. For $\Gamma = 0$ this reduces to the unregularized least squares solution, provided that $(A^{T}A)^{-1}$ exists.
Typically for ridge regression, two departures from Tikhonov regularization are described. First, the Tikhonov matrix is replaced by a multiple of the identity matrix,
$\Gamma= \alpha I $,
giving preference to solutions with smaller norm, i.e., smaller $L_2$ norm. Then $\Gamma^{T} \Gamma$ becomes $\alpha^2 I$, leading to
$\hat{x} = (A^{T}A+ \alpha^2 I )^{-1}A^{T}\mathbf{b}$
Finally, for ridge regression it is typically assumed that the variables in $A$ (now written $X$) are scaled so that $X^{T}X$ has the form of a correlation matrix, and $X^{T}\mathbf{b}$ is the vector of correlations between the $x$ variables and $\mathbf{b}$, leading to
$\hat{x} = (X^{T}X+ \alpha^2 I )^{-1}X^{T}\mathbf{b}$
Note that in this form the Lagrange multiplier $\alpha^2$ is usually replaced by $k$, $\lambda$, or some other symbol, but retains the property $\lambda\geq0$.
In formulating this answer, I acknowledge borrowing liberally from Wikipedia and from Ridge estimation of transfer function weights.
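A quick numerical check (my own sketch, not part of the original answer) that the Tikhonov solution with $\Gamma = \alpha I$, its augmented least-squares form, and the ridge formula all coincide:

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.normal(size=(20, 3))
b = rng.normal(size=20)
alpha = 0.7

# General Tikhonov solution with Gamma = alpha * I ...
Gamma = alpha * np.eye(3)
x_tik = np.linalg.solve(A.T @ A + Gamma.T @ Gamma, A.T @ b)

# ... coincides with the ridge formula (A'A + alpha^2 I)^{-1} A'b
x_ridge = np.linalg.solve(A.T @ A + alpha**2 * np.eye(3), A.T @ b)

# Equivalent augmented least-squares form: stack Gamma under A and
# zeros under b, then solve the ordinary least squares problem
x_aug = np.linalg.lstsq(np.vstack([A, Gamma]),
                        np.concatenate([b, np.zeros(3)]),
                        rcond=None)[0]

print(np.allclose(x_tik, x_ridge), np.allclose(x_tik, x_aug))
```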
Is Tikhonov regularization the same as Ridge Regression?
Carl has given a thorough answer that nicely explains the mathematical differences between Tikhonov regularization vs. ridge regression. Inspired by the historical discussion here, I thought it might be useful to add a short example demonstrating how the more general Tikhonov framework can be useful.
First a brief note on context. Ridge regression arose in statistics, and while regularization is now widespread in statistics & machine learning, Tikhonov's approach was originally motivated by inverse problems arising in model-based data assimilation (particularly in geophysics). The simplified example below is in this category (more complex versions are used for paleoclimate reconstructions).
Imagine we want to reconstruct temperatures $u[x,t=0]$ in the past, based on present-day measurements $u[x,t=T]$. In our simplified model we will assume that temperature evolves according to the heat equation
$$ u_t = u_{xx} $$
in 1D with periodic boundary conditions
$$ u[x+L,t] = u[x,t] $$
A simple (explicit) finite difference approach leads to the discrete model
$$ \frac{\Delta\mathbf{u}}{\Delta{t}} = \frac{\mathbf{Lu}}{\Delta{x^2}} \implies \mathbf{u}_{t+1} = \mathbf{Au}_t $$
Mathematically, the evolution matrix $\mathbf{A}$ is invertible, so we have
$$\mathbf{u}_t = \mathbf{A^{-1}u}_{t+1} $$
However numerically, difficulties will arise if the time interval $T$ is too long.
Tikhonov regularization can solve this problem by solving
\begin{align} \mathbf{Au}_t &\approx \mathbf{u}_{t+1} \\
\omega\mathbf{Lu}_t &\approx \mathbf{0}
\end{align}
which adds a small penalty $\omega^2\ll{1}$ on roughness $u_{xx}$.
Below is a comparison of the results:
We can see that the original temperature $u_0$ has a smooth profile, which is smoothed still further by diffusion to give $u_\mathsf{fwd}$. Direct inversion fails to recover $u_0$, and the solution $u_\mathsf{inv}$ shows strong "checkerboarding" artifacts. However the Tikhonov solution $u_\mathsf{reg}$ is able to recover $u_0$ with quite good accuracy.
Note that in this example, ridge regression would always push our solution towards an "ice age" (i.e. uniform zero temperatures). Tikhonov regularization allows us a more flexible, physically based prior constraint: here our penalty essentially says the reconstruction $\mathbf{u}$ should be only slowly evolving, i.e. $u_t\approx{0}$.
Matlab code for the example is below.
% Tikhonov Regularization Example: Inverse Heat Equation
n=15; t=2e1; w=1e-2; % grid size, # time steps, regularization
L=toeplitz(sparse([-2,1,zeros(1,n-3),1]/2)); % laplacian (periodic BCs)
A=(speye(n)+L)^t; % forward operator (diffusion)
x=(0:n-1)'; u0=sin(2*pi*x/n); % initial condition (periodic & smooth)
ufwd=A*u0; % forward model
uinv=A\ufwd; % inverse model
ureg=[A;w*L]\[ufwd;zeros(n,1)]; % regularized inverse
plot(x,u0,'k.-',x,ufwd,'k:',x,uinv,'r.:',x,ureg,'ro');
set(legend('u_0','u_{fwd}','u_{inv}','u_{reg}'),'box','off');
What is the difference between "mean value" and "average"?
Mean versus average
The mean most commonly refers to the arithmetic mean, but may refer to some other form of mean, such as harmonic or geometric (see the Wikipedia article). Thus, when used without qualification, I think most people would assume that "mean" refers to the arithmetic mean.
Average has many meanings, some of which are much less mathematical than the term "mean". Even within the context of numerical summaries, "average" can refer to a broad range of measures of central tendency.
Thus, the arithmetic mean is one type of average.
Arguably, when used without qualification the average of a numeric variable often is meant to refer to the arithmetic mean.
Side point
It is interesting to observe that Excel uses the sloppier but more accessible name AVERAGE() for its arithmetic mean function, whereas R uses mean().
|
6,911
|
What is the difference between "mean value" and "average"?
|
There are several "averages." Just think of this trick question: "What is the probability that the next person you meet has more than the average number of arms?"
The "mean" or "arithmetic mean" or "arithmetic average" is one average that you learned in the past. But the median (the value with half the observations greater and half less than it), the mode (the most common value), the geometric mean (multiply the values then take the nth root), the harmonic mean (the reciprocal of the mean of the reciprocals of the data), and others all fall under the general term "average."
|
6,912
|
What is the difference between "mean value" and "average"?
|
Mean and average are generally used interchangeably (although I've also seen them used to distinguish a population value from an empirical one).
They, like median and mode, are measures of central tendency, but in many cases, the other two are different.
|
6,913
|
What is the difference between "mean value" and "average"?
|
The mean you described (the arithmetic mean) is what people typically intend when they say "mean" and, yes, that is the same as average. The only ambiguity that can occur is when someone is using a different type of mean, such as the geometric mean or the harmonic mean, but I think it is implicit from your question that you were talking about the arithmetic mean.
|
6,914
|
What is the difference between "mean value" and "average"?
|
I see "average" and "mean" used mostly as synonyms. One author who draws a clear distinction is Donald Wheeler, in "Advanced Topics in Statistical Quality Control." He declares that the "average" is a statistic determined by some arithmetic procedure, whereas "mean" is a parameter, specifying location of a distribution. By way of example, he writes that one could calculate an "average" telephone number, which would be meaningless (pun?). The average (a statistic) is an unbiased estimate of the mean (a parameter).
|
6,915
|
Data has two trends; how to extract independent trendlines?
|
To solve your problem, a good approach is to define a probabilistic model that matches the assumptions about your dataset. In your case, you probably want a mixture of linear regression models. You can create a "mixture of regressors" model similar to a gaussian mixture model by associating different data points with different mixture components.
I have included some code to get you started. The code implements an EM algorithm for a mixture of two regressors (it should be relatively easy to extend to larger mixtures). The code seems to be fairly robust for random datasets. However, unlike linear regression, mixture models have non-convex objectives, so for a real dataset, you may need to run a few trials with different random starting points.
import numpy as np
import matplotlib.pyplot as plt
import scipy.linalg as lin
#generate some random data
N=100
x=np.random.rand(N,2)
x[:,1]=1
w=np.random.rand(2,2)
y=np.zeros(N)
n=int(np.random.rand()*N)
y[:n]=np.dot(x[:n,:],w[0,:])+np.random.normal(size=n)*.01
y[n:]=np.dot(x[n:,:],w[1,:])+np.random.normal(size=N-n)*.01
rx=np.ones( (100,2) )
r=np.arange(0,1,.01)
rx[:,0]=r
#plot the random dataset
plt.plot(x[:,0],y,'.b')
plt.plot(r,np.dot(rx,w[0,:]),':k',linewidth=2)
plt.plot(r,np.dot(rx,w[1,:]),':k',linewidth=2)
# regularization parameter for the regression weights
lam=.01
def em():
    # mixture weights
    rpi = np.zeros((2)) + .5
    # expected mixture weights for each data point
    pi = np.zeros((len(x), 2)) + .5
    # the regression weights
    w1 = np.random.rand(2)
    w2 = np.random.rand(2)
    # precision term for the probability of the data under the regression function
    eta = 100
    for _ in range(100):
        if 0:  # flip to 1 to watch the fits evolve
            plt.plot(r, np.dot(rx, w1), '-r', alpha=.5)
            plt.plot(r, np.dot(rx, w2), '-g', alpha=.5)
        # compute log-likelihood for each data point
        err1 = y - np.dot(x, w1)
        err2 = y - np.dot(x, w2)
        prbs = np.zeros((len(y), 2))
        prbs[:, 0] = -.5 * eta * err1 ** 2
        prbs[:, 1] = -.5 * eta * err2 ** 2
        # compute expected mixture weights
        pi = np.tile(rpi, (len(x), 1)) * np.exp(prbs)
        pi /= np.tile(np.sum(pi, 1), (2, 1)).T
        # max with respect to the mixture probabilities
        rpi = np.sum(pi, 0)
        rpi /= np.sum(rpi)
        # max with respect to the regression weights
        pi1x = np.tile(pi[:, 0], (2, 1)).T * x
        xp1 = np.dot(pi1x.T, x) + np.eye(2) * lam / eta
        yp1 = np.dot(pi1x.T, y)
        w1 = lin.solve(xp1, yp1)
        pi2x = np.tile(pi[:, 1], (2, 1)).T * x
        xp2 = np.dot(pi2x.T, x) + np.eye(2) * lam / eta
        yp2 = np.dot(pi2x.T, y)
        w2 = lin.solve(xp2, yp2)
        # max wrt the precision term
        eta = np.sum(pi) / np.sum(-prbs / eta * pi)
        # objective function - unstable as the pi's become concentrated on a single component
        obj = (np.sum(prbs * pi)
               - np.sum(pi[pi > 1e-50] * np.log(pi[pi > 1e-50]))
               + np.sum(pi * np.log(np.tile(rpi, (len(x), 1))))
               + np.log(eta) * np.sum(pi))
        print(obj, eta, rpi, w1, w2)
        try:
            if np.isnan(obj):
                break
            if np.abs(obj - oldobj) < 1e-2:
                break
        except NameError:  # oldobj does not exist on the first iteration
            pass
        oldobj = obj
    return w1, w2
#run the em algorithm and plot the solution
rw1,rw2=em()
plt.plot(r,np.dot(rx,rw1),'-r')
plt.plot(r,np.dot(rx,rw2),'-g')
plt.show()
|
6,916
|
Data has two trends; how to extract independent trendlines?
|
Elsewhere in this thread, user1149913 provides great advice (define a probabilistic model) and code for a powerful approach (EM estimation). Two issues remain to be addressed:
How to cope with departures from the probabilistic model (which are very evident in the 2011-2012 data and somewhat evident in the undulations of the less-sloped points).
How to identify good starting values for the EM algorithm (or any other algorithm).
To address #2, consider using a Hough transform. This is a feature-detection algorithm which, for finding linear stretches of features, can efficiently be computed as a Radon transform.
Conceptually, the Hough transform depicts sets of lines. A line in the plane can be parameterized by its slope, $x$, and its distance, $y$, from a fixed origin. A point in this $x,y$ coordinate system thereby designates a single line. Each point in the original plot determines a pencil of lines passing through that point: this pencil appears as a curve in the Hough transform. When features in the original plot fall along a common line, or near enough to one, then the collections of curves they produce in the Hough transform tend to have a common intersection corresponding to that common line. By finding these points of greatest intensity in the Hough transform, we can read off good solutions to the original problem.
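The same idea can be sketched numerically (here in Python rather than the Mathematica used below, with two made-up noise-free lines):

```python
import math

# Points from two lines: y = 2x and y = -x + 10.
points = [(x, 2 * x) for x in range(10)] + [(x, -x + 10) for x in range(10)]

# Each point votes along its sinusoid rho = x*cos(theta) + y*sin(theta)
# in a coarse (theta, rho) accumulator: this is the Hough transform.
n_theta = 180
votes = {}
for x, y in points:
    for t in range(n_theta):
        theta = math.pi * t / n_theta
        rho = round(x * math.cos(theta) + y * math.sin(theta))
        votes[(t, rho)] = votes.get((t, rho), 0) + 1

# Cells where all ten collinear points agree mark the two lines.
peak_angles = sorted({t for (t, rho), v in votes.items() if v >= 10})
print(peak_angles)  # angles in degrees, clustered around the two line directions
```

The accumulator peaks land in two tight clusters of angles, one per line, which is the point-clustering reduction described above.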
To get started with these data, I first cropped out the auxiliary stuff (axes, tick marks, and labels) and for good measure cropped out the obviously outlying points at the bottom right and sprinkled along the bottom axis. (When that stuff is not cropped out, the procedure still works well, but it also detects the axes, the frames, the linear sequences of ticks, the linear sequences of labels, and even the points lying sporadically on the bottom axis!)
img = Import["http://i.stack.imgur.com/SkEm3.png"]
i = ColorNegate[Binarize[img]]
crop2 = ImageCrop[ImageCrop[i, {694, 531}, {Left, Bottom}], {565, 467}, {Right, Top}]
(This and the rest of the code are in Mathematica.)
To each dot in this image corresponds a narrow range of curves in the Hough transform, visible here. They are sine waves:
hough2 = Radon[crop2, Method -> "Hough"] // ImageAdjust
This makes visually manifest the sense in which the question is a line clustering problem: the Hough transform reduces it to a point clustering problem, to which we can apply any clustering method we like.
In this case, the clustering is so clear that simple post-processing of the Hough transform sufficed. To identify locations of greatest intensity in the transform, I increased the contrast and blurred the transform over a radius of about 1%: that's comparable to the diameters of the plot points in the original image.
blur = ImageAdjust[Blur[ImageAdjust[hough2, {1, 0}], 8]]
Thresholding the result narrowed it to two tiny blobs whose centroids reasonably identify the points of greatest intensity: these estimate the fitted lines.
(comp = MorphologicalComponents[blur, 0.777]) // Colorize
(The threshold of $0.777$ was found empirically: it produces only two regions and the smaller of the two is almost as small as possible.)
The left side of the image corresponds to a direction of 0 degrees (horizontal) and, as we look from left to right, that angle increases linearly to 180 degrees. Interpolating, I compute that the two blobs are centered at 19 and 57.1 degrees, respectively. We can also read off the intercepts from the vertical positions of the blobs. This information yields the initial fits:
width = ImageDimensions[blur][[1]];
slopes = Module[{x, y, z}, ComponentMeasurements[comp, "Centroid"] /.
Rule[x_, {y_, z_}] :> Round[((y - 1/2)/(width - 1)) 180., 0.1]
]
{19., 57.1}
In a similar fashion one can compute the intercepts corresponding to these slopes, giving these fits:
(The red line corresponds to the tiny pink dot in the previous picture and the blue line corresponds to the larger aqua blob.)
To a great extent, this approach has automatically dealt with the first issue: deviations from linearity smear out the points of greatest intensity, but typically do not shift them much. Frankly outlying points will contribute low-level noise throughout the Hough transform, which will disappear during the post-processing procedures.
At this point one can provide these estimates as starting values for the EM algorithm or for a likelihood minimizer (which, given good estimates, will converge quickly). Better, though, would be to use a robust regression estimator such as iteratively reweighted least squares. It is able to provide a regression weight to every point. Low weights indicate a point does not "belong" to a line. Exploit these weights, if desired, to assign each point to its proper line. Then, having classified the points, you can use ordinary least squares (or any other regression procedure) separately on the two groups of points.
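To illustrate that last suggestion, here is a minimal iteratively reweighted least squares fit with Huber weights, written from scratch in Python/NumPy on made-up contaminated data (a sketch of the technique, not the code behind the figures above):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: y = 3x + 1 with a few gross outliers mixed in.
x = np.linspace(0, 10, 50)
y = 3 * x + 1 + rng.normal(0, 0.2, x.size)
y[::10] += 25  # contaminate every tenth point

X = np.column_stack([x, np.ones_like(x)])

beta = np.linalg.lstsq(X, y, rcond=None)[0]  # start from ordinary least squares
k = 1.345  # Huber tuning constant, in units of the residual scale
for _ in range(25):
    r = y - X @ beta
    s = np.median(np.abs(r)) / 0.6745 + 1e-12  # robust scale estimate (MAD)
    w = np.minimum(1.0, k * s / np.maximum(np.abs(r), 1e-12))  # Huber weights
    sw = np.sqrt(w)
    beta = np.linalg.lstsq(sw[:, None] * X, sw * y, rcond=None)[0]

print(beta)     # slope and intercept land near (3, 1) despite the outliers
print(w[::10])  # the contaminated points end up with weights far below 1
```

The final weights are exactly the per-point "belonging" scores described above: thresholding them classifies points before a final per-group ordinary fit.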
|
6,917
|
Data has two trends; how to extract independent trendlines?
|
I found this question linked from another question. I actually did academic research on this kind of problem; please check my answer to the question '"Least square root" fitting? A fitting method with multiple minima' for more details.
whuber's Hough transform based approach is a very good solution for simple scenarios as the one you gave. I worked on scenarios with more complex data, such as this:
My co-authors and I denoted this a "data association" problem. When you try to solve it, the main problem is typically combinatorial due to the exponential amount of possible data combinations.
We have a publication "Overlapping Mixtures of Gaussian Processes for the data association problem" where we approached the general problem of N curves with an iterative technique, giving very good results. You can find Matlab code linked in the paper.
[Update] A Python implementation of the OMGP technique can be found in the GPClust library.
I have another paper where we relaxed the problem to obtain a convex optimization problem, but it has not been accepted for publication yet. It is specific for 2 curves, so it would work perfectly on your data. Let me know if you are interested.
|
6,918
|
Data has two trends; how to extract independent trendlines?
|
user1149913 has an excellent answer (+1), but it looks to me that your data collection fell apart in late 2011, so you'd have to cut that part of your data off, and then still run things a few times with different random starting coefficients to see what you get.
One straightforward way to do things would be to separate your data into two sets by eye, then use whatever linear model technique you're used to. In R, it would be the lm function.
Or fit two lines by eye. In R you would use abline to do this.
The data's jumbled, has outliers, and falls apart at the end, yet by-eye has two fairly obvious lines, so I'm not sure a fancy method is worth it.
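A sketch of that straightforward route in Python (made-up two-trend data, a "by eye" split via a dividing line, and `np.polyfit` standing in for R's `lm`):

```python
import numpy as np

rng = np.random.default_rng(1)

# Made-up data mixing two linear trends: y = x + 2 and y = 4x - 1.
x = rng.uniform(2, 10, 200)
group = rng.integers(0, 2, 200)
y = np.where(group == 0, x + 2, 4 * x - 1) + rng.normal(0, 0.3, 200)

# "By eye": over this x range the trends are well separated by y = 2.5x.
steep = y > 2.5 * x

fits = {}
for label, mask in [("steep", steep), ("shallow", ~steep)]:
    slope, intercept = np.polyfit(x[mask], y[mask], 1)  # the lm()-style fit
    fits[label] = (slope, intercept)

print(fits)  # roughly slope 4 / intercept -1 and slope 1 / intercept 2
```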
|
6,919
|
What is the difference between the vertical bar and semi-colon notations?
|
I believe the origin of this is the likelihood paradigm (though I have not checked the actual historical correctness of the below, it is a reasonable way of understanding how it came to be).
Let's say in a regression setting, you would have a distribution:
$$
p(Y | x, \beta)
$$
Which means: the distribution of $Y$ if you know (conditional on) the $x$ and $\beta$ values.
If you want to estimate the betas, you want to maximize the likelihood:
$$
L(\beta; y,x) = p(Y | x, \beta)
$$
Essentially, you are now looking at the expression $p(Y | x, \beta)$ as a function of the betas, but apart from that, there is no difference (for mathematically correct expressions that you can properly derive, this distinction is a necessity, although in practice nobody bothers).
Then, in Bayesian settings, the difference between parameters and other variables soon fades, so people started to use both notations interchangeably.
So, in essence: there is no actual difference: they both indicate the conditional distribution of the thing on the left, conditional on the thing(s) on the right.
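A small numeric sketch of this reading (Gaussian regression through the origin, with arbitrary numbers): the same expression $p(y \mid x, \beta)$, scanned as a function of $\beta$ with the data held fixed, peaks at the least-squares estimate.

```python
import numpy as np

rng = np.random.default_rng(0)

# Fixed data generated from y = 2x + Gaussian noise.
x = rng.uniform(-1, 1, 100)
y = 2.0 * x + rng.normal(0, 0.5, 100)

def log_lik(beta, sigma=0.5):
    """log p(y | x, beta), read as L(beta; y, x): data fixed, beta varies."""
    r = y - beta * x
    return -0.5 * np.sum((r / sigma) ** 2) - r.size * np.log(sigma * np.sqrt(2 * np.pi))

# Scan beta: the likelihood is maximized at the least-squares slope.
betas = np.linspace(0, 4, 401)
best = betas[np.argmax([log_lik(b) for b in betas])]
ols = np.sum(x * y) / np.sum(x * x)  # closed-form least-squares slope
print(best, ols)  # both near the true slope 2, and near each other
```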
|
6,920
|
What is the difference between the vertical bar and semi-colon notations?
|
$f(x;\theta)$ is the density of the random variable $X$ at the point $x$, with $\theta$ being the parameter of the distribution. $f(x,\theta)$ is the joint density of $X$ and $\Theta$ at the point $(x,\theta)$ and only makes sense if $\Theta$ is a random variable. $f(x|\theta)$ is the conditional distribution of $X$ given $\Theta$, and again, only makes sense if $\Theta$ is a random variable. This will become much clearer when you get further into the book and look at Bayesian analysis.
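When $\Theta$ is a random variable, the three objects are tied together by the standard identities (general probability facts, not specific to the book):

```latex
f(x, \theta) = f(x \mid \theta)\, f(\theta),
\qquad
f(x) = \int f(x \mid \theta)\, f(\theta)\, d\theta .
```

In the frequentist reading, $f(x;\theta)$ denotes the same function of $x$ as $f(x \mid \theta)$, but with $\theta$ treated as a fixed index of a family of densities rather than as a realization of $\Theta$.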
|
6,921
|
What is the difference between the vertical bar and semi-colon notations?
|
$f(x;\theta)$ is the same as $f(x|\theta)$, simply meaning that $\theta$ is a fixed parameter and the function $f$ is a function of $x$. $f(x,\Theta)$, on the other hand, is an element of a family (or set) of functions, where the elements are indexed by $\Theta$. A subtle distinction, perhaps, but an important one, especially when it comes time to estimate an unknown parameter $\theta$ on the basis of known data $x$; at that time, $\theta$ varies and $x$ is fixed, resulting in the "likelihood function". Usage of $\mid$ is more common among statisticians, while $;$ is more common among mathematicians.
|
6,922
|
What is the difference between the vertical bar and semi-colon notations?
|
Although it hasn't always been this way, these days $P(z; d, w)$ is generally used when $d,w$ are not random variables (which isn't to say that they're known, necessarily). $P(z | d, w)$ indicates conditioning on values of $d,w$. Conditioning is an operation on random variables and as such using this notation when $d, w$ aren't random variables is confusing (and tragically common).
As @Nick Sabbe points out $p(y|X, \Theta)$ is a common notation for the sampling distribution of observed data $y$. Some frequentists will use this notation but insist that $\Theta$ isn't a random variable, which is an abuse IMO. But they have no monopoly there; I've seen Bayesians do it too, tacking fixed hyperparameters on at the end of the conditionals.
|
6,923
|
Can cross validation be used for causal inference?
|
I think it's useful to review what we know about cross-validation. Statistical results around CV fall into two classes: efficiency and consistency.
Efficiency is what we're usually concerned with when building predictive models. The idea is that we use CV to determine a model with asymptotic guarantees concerning the loss function. The most famous result here is due to Stone 1977 and shows that LOO CV is asymptotically equivalent to AIC. But Brett provides a good example where you can find a predictive model which doesn't inform you about the causal mechanism.
Consistency is what we're concerned with if our goal is to find the "true" model. The idea is that we use CV to determine a model with asymptotic guarantees that, given that our model space includes the true model, we'll discover it with a large enough sample. The most famous result here is due to Shao 1993 concerning linear models, but as he states in his abstract, his "shocking discovery" is the opposite of the result for LOO. For linear models, you can achieve consistency using leave-$k$-out CV as long as $k/n \rightarrow 1$ as $n \rightarrow \infty$. Beyond linear models, it's harder to derive statistical results.
But suppose you can meet the consistency criteria and your CV procedure leads to the true model: $Y = \beta X + e$. What have we learned about the causal mechanism? We simply know that there's a well-defined correlation between $Y$ and $X$, which doesn't say much about causal claims. From a traditional perspective, you need to bring in experimental design with the mechanism of control/manipulation to make causal claims. From the perspective of Judea Pearl's framework, you can bake causal assumptions into a structural model and use the probability-based calculus of counterfactuals to derive some claims, but you'll need to satisfy certain properties.
Perhaps you could say that CV can help with causal inference by identifying the true model (provided you can satisfy consistency criteria!). But it only gets you so far; CV by itself isn't doing any of the work in either framework of causal inference.
If you're interested further in what we can say with cross-validation, I would recommend Shao 1997 over the widely cited 1993 paper:
An Asymptotic Theory for Linear Model Selection (Shao, 1997)
You can skim through the major results, but it's interesting to read the discussion that follows. I thought the comments by Rao & Tibshirani, and by Stone, were particularly insightful. But note that while they discuss consistency, no claims are ever made regarding causality.
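A small simulation sketch of this consistency result (settings are illustrative, not taken from Shao's paper): two nested linear models are compared by validation error, with 90% of each sample held out so that $k/n$ is close to 1.

```python
import numpy as np

rng = np.random.default_rng(42)
n, n_train, reps = 100, 10, 200   # hold out 90% of each sample, so k/n is large
wins_true = 0

def val_mse(X, y):
    """Fit OLS on the small training split, score on the large validation split."""
    beta, *_ = np.linalg.lstsq(X[:n_train], y[:n_train], rcond=None)
    resid = y[n_train:] - X[n_train:] @ beta
    return np.mean(resid ** 2)

for _ in range(reps):
    x1 = rng.normal(size=n)
    x2 = rng.normal(size=n)                 # irrelevant predictor
    y = 2.0 * x1 + rng.normal(size=n)       # the true model uses x1 only
    X_true = np.column_stack([np.ones(n), x1])
    X_over = np.column_stack([np.ones(n), x1, x2])
    if val_mse(X_true, y) < val_mse(X_over, y):
        wins_true += 1

print(wins_true / reps)  # the smaller true model should win a clear majority
```

Even when selection favors the true model, nothing here distinguishes it from the same model with the causal arrow reversed, which is the point made in the text.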
|
6,924
|
Can cross validation be used for causal inference?
|
This is a really interesting question and I don't offer any specific citations. However, in general, I'd say NO, in and of itself, cross-validation does not offer any insight into causality. In the absence of a designed experiment, the issue of causality is always uncertain. As you suggest, cross-validation can and will improve predictive accuracy. This, alone, says nothing about causality.
Absent a designed experiment, causal inference would require a model that includes all of the relevant predictors--something that we can rarely guarantee in an observational study. Moreover, a simple lag variable, for example (or anything highly correlated with whatever outcome we were trying to predict) would produce a good model and one which could be validated in multiple samples. That does not mean, however, that we can infer causation. Cross-validation assures repeatability in predictions and nothing more. Causality is a matter of design and logic.
EDIT:
Here's an example to illustrate. I could build a model with good predictive accuracy that predicts the population of a city based on the amount of money the city spends on trash removal. I could use cross-validation to test the accuracy of that model as well as other methods to improve the accuracy of prediction and get more stable parameters. Now, while this model works great for prediction, the causal logic is wrong--the causal direction is reversed. No matter what the folks in the Public Works Department might argue, increasing their budget for trash removal would not be a good strategy to increase the city's population (the causal interpretation).
The issues of accuracy and repeatability of a model are separate from our ability to make causal inferences about the relationships we observe. Cross-validation helps us with the former and not with the latter. Now, IF we are estimating a "correct" model in terms of specifying a causal relationship (for example, trying to determine what our trash removal budget should be based on our expected population next year), cross-validation can help us to have greater confidence in our estimate of that effect. However, cross-validation does nothing to help us choose the "correct" model with regard to causal relationships. Again, here we need to rely on the design of the study, our subject matter expertise, theory, and logic.
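A sketch of the trash-removal example with simulated numbers (all values illustrative): for a simple linear regression, predictive fit is symmetric in the two variables, so goodness of prediction cannot orient the causal arrow.

```python
import numpy as np

rng = np.random.default_rng(1)
population = rng.normal(100_000, 20_000, size=500)
# Here the budget is caused by the population (cities spend per resident):
budget = 50.0 * population + rng.normal(0, 100_000, size=500)

def r_squared(x, y):
    """R^2 of a simple linear regression of y on x."""
    X = np.column_stack([np.ones_like(x), x])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1.0 - resid.var() / y.var()

forward = r_squared(population, budget)   # causally sensible direction
backward = r_squared(budget, population)  # causally reversed direction
print(forward, backward)  # both high, and essentially equal
```

For simple regression with an intercept, $R^2$ equals the squared correlation, which is the same in both directions; cross-validating either model would only confirm that both predict well.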
|
6,925
|
Can cross validation be used for causal inference?
|
It seems to me that your question more generally addresses different flavours of validation for a predictive model: cross-validation has somewhat more to do with internal validity, or at least the initial modelling stage, whereas drawing causal links on a wider population is more related to external validity. By that (and as an update following @Brett's nice remark), I mean that we usually build a model on a working sample, assuming a hypothetical conceptual model (i.e., we specify the relationships between predictors and the outcome(s) of interest), and we try to obtain reliable estimates with a minimal classification error rate or a minimal prediction error. Hopefully, the better the model performs, the better it will allow us to predict outcome(s) on unseen data; still, CV doesn't tell us anything about the "validity" or adequacy of the hypothesized causal links. We could certainly achieve decent results with a model where some moderation and/or mediation effects are neglected or simply not known in advance.
My point is that whatever the method you use to validate your model (and the holdout method is certainly not the best one, but still it is widely used in epidemiological studies to alleviate the problems arising from stepwise model building), you work with the same sample (which we assume is representative of a larger population). On the contrary, generalizing the results and the causal links inferred this way to new samples or a plausibly related population is usually done by replication studies. This ensures that we can safely test the predictive ability of our model in a "superpopulation" which features a larger range of individual variations and may exhibit other potential factors of interest.
Your model might provide valid predictions for your working sample, and it may include all of the potential confounders you thought of; however, it is possible that it will not perform as well with new data, just because other factors appear in the intervening causal path that were not identified when building the initial model. This may happen if some of the predictors and the causal links inferred from them depend on the particular trial centre where patients were recruited, for example.
In genetic epidemiology, many genome-wide association studies fail to replicate just because we are trying to model complex diseases with an oversimplified view of the causal relationships between DNA markers and the observed phenotype, while it is very likely that gene-gene (epistasis), gene-disease (pleiotropy), gene-environment, and population substructure effects all come into play; see for example Validating, augmenting and refining genome-wide association signals (Ioannidis et al., Nature Reviews Genetics, 2009, 10). So, we can build up a well-performing model to account for the observed cross-variations between a set of genetic markers (with very low and sparse effect sizes) and a multivariate pattern of observed phenotypes (e.g., volume of white/gray matter or localized activities in the brain as observed through fMRI, responses to neuropsychological assessment or a personality inventory), and it still won't perform as expected on an independent sample.
As for a general reference on this topic, I can recommend chapter 17 and Part III of Clinical Prediction Models, by EW Steyerberg (Springer, 2009). I also like the following article from Ioannidis:
Ioannidis, JPA. Why Most Published Research Findings Are False. PLoS Med. 2005; 2(8): e124.
|
6,926
|
Can cross validation be used for causal inference?
|
This is a good question, but the answer is definitely no: cross-validation will not improve causal inference. If you have a mapping between symptoms and diseases, cross-validation will help to ensure that your model matches their joint distribution better than if you had simply fit your model to the entire raw data set, but it can't ever tell you anything about the directionality of causation.
Cross-validation is very important and worth studying, but it does nothing more than prevent you from overfitting to noise in your data set. If you'd like to understand it more, I'd suggest Chapter 7 of ESL: http://www-stat.stanford.edu/~hastie/Papers/ESLII.pdf
|
6,927
|
Can cross validation be used for causal inference?
|
To respond to the follow-up @Andy posted as an answer here...
Although I could not say which estimate is correct and which is false, doesn't the inconsistency in the Assault Conviction and the Gun conviction estimates between the two models cast doubt that either has a true causal effect on sentence length?
I think what you mean is the discrepancy in the parameter estimates gives us reason to believe that neither parameter estimate represents the true causal effect. I agree with that, though we already had plenty of reason to be skeptical that such a model would render the true causal effect.
Here's my take:
Over-fitting data is a source of biased parameter estimates, and with no reason to believe that this bias offsets other sources of bias in estimating a particular causal effect, it must then be better, on average, to estimate causal effects without over-fitting the data. Cross-validation prevents over-fitting, thus it should, on average, improve estimates of causal effects.
But if someone is trying to convince me to believe their estimate of a causal effect from observational data, proving that they haven't over-fit their data is a low priority unless I have strong reason to suspect their modelling strategy is likely to have over-fit.
In the social science applications I work with, I'm much more concerned with substantive issues, measurement issues, and sensitivity checks. By sensitivity checks I mean estimating variations on the model where terms are added or removed, and estimating models with interactions allowing the effect of interest to vary across sub-groups. How much do these changes to the statistical model affect the parameter estimate we want to interpret causally? Are the discrepancies in this parameter estimate across model specifications or sub-groups understandable in terms of the causal story you are trying to tell, or do they hint at an effect driven by, e.g., selection?
In fact, before you run these alternative specifications, write down how you think your parameter estimate will change. It's great if your parameter estimate of interest doesn't vary much across sub-groups or specifications; in the context of my work, that is more important than cross-validation. But other substantive issues affecting my interpretation are more important still.
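A sketch of one such sensitivity check on simulated data with a known confounder $z$ (all numbers illustrative): adding or removing a term and watching the coefficient of interest move is exactly the kind of discrepancy that hints at confounding.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 5000
z = rng.normal(size=n)                        # confounder
x = z + rng.normal(size=n)                    # "treatment", partly driven by z
y = 1.0 * x + 2.0 * z + rng.normal(size=n)    # true causal effect of x is 1.0

def coef_on_x(*covariates):
    """OLS coefficient on x, with optional extra covariates in the model."""
    X = np.column_stack([np.ones(n), x, *covariates])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta[1]  # coefficient on x (first column after the intercept)

naive = coef_on_x()        # z omitted: the estimate absorbs the confounding
adjusted = coef_on_x(z)    # z included: the estimate sits near the true 1.0
print(naive, adjusted)     # a large shift flags a confounded specification
```

Both specifications would cross-validate well; only the instability of the coefficient across them, interpreted with subject-matter knowledge, raises the causal red flag.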
|
6,928
|
Can cross validation be used for causal inference?
|
I thank everyone for their answers, but the question has grown to something I did not intend it to, being mainly an essay on the general notion of causal inference with no right answer.
I initially intended the question to probe the audience for examples of the use of cross validation for causal inference. I had assumed such methods existed, as the notion of using a test sample and hold-out sample to assess repeatability of effect estimates seemed logical to me. Like John noted, what I was suggesting isn't dissimilar to bootstrapping, and I would say it resembles other methods we use to validate results such as subset specificity tests or non-equivalent dependent variables (bootstrapping relaxes parametric assumptions of models, and the subset tests in a more general manner are used as a check that results are logical in varied situations). None of these methods meets the other answers' standards of proof for causal inference, but I believe they are still useful for causal inference.
chl's comment is correct in that my assertion for using cross validation is a check on internal validity to aid in causal inference. But I ask we throw away the distinction between internal and external validity for now, as it does nothing to further the debate. chl's example of genome wide studies in epidemiology I would consider a prime example of poor internal validity, making strong inferences inherently dubious. I think the genome association studies are actually an example of what I asked for. Do you think the inferences between genes and disease are improved through the use of cross-validation (as opposed to just throwing all markers into one model and adjusting p-values accordingly)?
Below I have pasted a copy of a table in the Berk article I cited in my question. While these tables were shown to demonstrate the false logic of using step-wise selection criteria and causal inference on the same model, let's pretend no model selection criteria were used, and the parameters in both the training and hold-out sample were determined a priori. This does not strike me as an unrealistic result. Although I could not say which estimate is correct and which is false, doesn't the inconsistency in the Assault Conviction and the Gun Conviction estimates between the two models cast doubt that either has a true causal effect on sentence length? Is knowing that variation not useful? If we lose nothing by having a hold-out sample to test our model, why can't we use cross-validation to improve causal inference (or am I missing what we are losing by using a hold-out sample)?
|
Can cross validation be used for causal inference?
|
I thank everyone for their answers, but the question has grown to something I did not intend it to, being mainly an essay on the general notion of causal inference with no right answer.
I initially i
|
Can cross validation be used for causal inference?
I thank everyone for their answers, but the question has grown to something I did not intend it to, being mainly an essay on the general notion of causal inference with no right answer.
I initially intended the question to probe the audience for examples of the use of cross validation for causal inference. I had assumed such methods existed, as the notion of using a test sample and hold-out sample to assess repeatability of effect estimates seemed logical to me. Like John noted, what I was suggesting isn't dissimilar to bootstrapping, and I would say it resembles other methods we use to validate results such as subset specificity tests or non-equivalent dependent variables (bootstrapping relaxes parametric assumptions of models, and the subset tests in a more general manner are used as a check that results are logical in varied situations). None of these methods meets the other answers' standards of proof for causal inference, but I believe they are still useful for causal inference.
chl's comment is correct in that my assertion for using cross validation is a check on internal validity to aid in causal inference. But I ask we throw away the distinction between internal and external validity for now, as it does nothing to further the debate. chl's example of genome wide studies in epidemiology I would consider a prime example of poor internal validity, making strong inferences inherently dubious. I think the genome association studies are actually an example of what I asked for. Do you think the inferences between genes and disease are improved through the use of cross-validation (as opposed to just throwing all markers into one model and adjusting p-values accordingly)?
Below I have pasted a copy of a table in the Berk article I cited in my question. While these tables were shown to demonstrate the false logic of using step-wise selection criteria and causal inference on the same model, let's pretend no model selection criteria were used, and the parameters in both the training and hold-out sample were determined a priori. This does not strike me as an unrealistic result. Although I could not say which estimate is correct and which is false, doesn't the inconsistency in the Assault Conviction and the Gun Conviction estimates between the two models cast doubt that either has a true causal effect on sentence length? Is knowing that variation not useful? If we lose nothing by having a hold-out sample to test our model, why can't we use cross-validation to improve causal inference (or am I missing what we are losing by using a hold-out sample)?
|
Can cross validation be used for causal inference?
I thank everyone for their answers, but the question has grown to something I did not intend it to, being mainly an essay on the general notion of causal inference with no right answer.
I initially i
|
6,929
|
Can cross validation be used for causal inference?
|
I guess this is an intuitive way to think about the relation between CV and causal inference: (please correct if I am wrong)
I always think about CV as a way to evaluate the performance of a model in predictions. However, in causal inference we are more concerned with something equivalent to Occam's Razor (parsimony), hence CV won't help.
Thanks.
|
Can cross validation be used for causal inference?
|
I guess this is an intuitive way to think about the relation between CV and causal inference: (please correct if I am wrong)
I always think about CV as a way to evaluate the performance of a model in
|
Can cross validation be used for causal inference?
I guess this is an intuitive way to think about the relation between CV and causal inference: (please correct if I am wrong)
I always think about CV as a way to evaluate the performance of a model in predictions. However, in causal inference we are more concerned with something equivalent to Occam's Razor (parsimony), hence CV won't help.
Thanks.
|
Can cross validation be used for causal inference?
I guess this is an intuitive way to think about the relation between CV and causal inference: (please correct if I am wrong)
I always think about CV as a way to evaluate the performance of a model in
|
6,930
|
Is there any supervised-learning problem that (deep) neural networks obviously couldn't outperform any other methods?
|
Here is one theoretical and two practical reasons why someone might rationally prefer a non-DNN approach.
The No Free Lunch Theorem from Wolpert and Macready says
We have dubbed the associated results NFL theorems because they demonstrate that if an algorithm performs well on a certain class of problems then it necessarily pays for that with degraded performance on the set of all remaining problems.
In other words, no single algorithm rules them all; you've got to benchmark.
The obvious rebuttal here is that you usually don't care about all possible problems, and deep learning seems to work well on several classes of problems that people do care about (e.g., object recognition), and so it's a reasonable first/only choice for other applications in those domains.
Many of these very deep networks require tons of data, as well as tons of computation, to fit. If you have (say) 500 examples, a twenty layer network is never going to learn well, while it might be possible to fit a much simpler model. There are a surprising number of problems where it's not feasible to collect a ton of data. On the other hand, one might try learning to solve a related problem (where more data is available), then use something like transfer learning to adapt it to the specific low-data-availability task.
Deep neural networks can also have unusual failure modes. There are some papers showing that barely-human-perceptible changes can cause a network to flip from correctly classifying an image to confidently misclassifying it. (See here and the accompanying paper by Szegedy et al.) Other approaches may be more robust against this: there are poisoning attacks against SVMs (e.g., this by Biggio, Nelson, and Laskov), but those happen at train time, rather than test time. At the opposite extreme, there are known (but not great) performance bounds for the nearest-neighbor algorithm. In some situations, you might be happier with lower overall performance with less chance of catastrophe.
|
Is there any supervised-learning problem that (deep) neural networks obviously couldn't outperform a
|
Here is one theoretical and two practical reasons why someone might rationally prefer a non-DNN approach.
The No Free Lunch Theorem from Wolpert and Macready says
We have dubbed the associated res
|
Is there any supervised-learning problem that (deep) neural networks obviously couldn't outperform any other methods?
Here is one theoretical and two practical reasons why someone might rationally prefer a non-DNN approach.
The No Free Lunch Theorem from Wolpert and Macready says
We have dubbed the associated results NFL theorems because they demonstrate that if an algorithm performs well on a certain class of problems then it necessarily pays for that with degraded performance on the set of all remaining problems.
In other words, no single algorithm rules them all; you've got to benchmark.
The obvious rebuttal here is that you usually don't care about all possible problems, and deep learning seems to work well on several classes of problems that people do care about (e.g., object recognition), and so it's a reasonable first/only choice for other applications in those domains.
Many of these very deep networks require tons of data, as well as tons of computation, to fit. If you have (say) 500 examples, a twenty layer network is never going to learn well, while it might be possible to fit a much simpler model. There are a surprising number of problems where it's not feasible to collect a ton of data. On the other hand, one might try learning to solve a related problem (where more data is available), then use something like transfer learning to adapt it to the specific low-data-availability task.
Deep neural networks can also have unusual failure modes. There are some papers showing that barely-human-perceptible changes can cause a network to flip from correctly classifying an image to confidently misclassifying it. (See here and the accompanying paper by Szegedy et al.) Other approaches may be more robust against this: there are poisoning attacks against SVMs (e.g., this by Biggio, Nelson, and Laskov), but those happen at train time, rather than test time. At the opposite extreme, there are known (but not great) performance bounds for the nearest-neighbor algorithm. In some situations, you might be happier with lower overall performance with less chance of catastrophe.
|
Is there any supervised-learning problem that (deep) neural networks obviously couldn't outperform a
Here is one theoretical and two practical reasons why someone might rationally prefer a non-DNN approach.
The No Free Lunch Theorem from Wolpert and Macready says
We have dubbed the associated res
|
6,931
|
Is there any supervised-learning problem that (deep) neural networks obviously couldn't outperform any other methods?
|
Somewhere on this playlist of lectures by Geoff Hinton (from his Coursera course on neural networks), there's a segment where he talks about two classes of problems:
Problems where noise is the key feature,
Problems where signal is the key feature.
I remember the explanation that while neural nets thrive in this latter space, traditional statistical methods are often better suited to the former. Analyzing high-res digital photographs of actual things in the world, a place where deep convolutional nets excel, clearly constitutes the latter.
On the other hand, when noise is the dominant feature, for example, in a medical case-control study with 50 cases and 50 controls, traditional statistical methods may be better suited to the problem.
If anybody finds that video, please comment and I'll update.
|
Is there any supervised-learning problem that (deep) neural networks obviously couldn't outperform a
|
Somewhere on this playlist of lectures by Geoff Hinton (from his Coursera course on neural networks), there's a segment where he talks about two classes of problems:
Problems where noise is the key f
|
Is there any supervised-learning problem that (deep) neural networks obviously couldn't outperform any other methods?
Somewhere on this playlist of lectures by Geoff Hinton (from his Coursera course on neural networks), there's a segment where he talks about two classes of problems:
Problems where noise is the key feature,
Problems where signal is the key feature.
I remember the explanation that while neural nets thrive in this latter space, traditional statistical methods are often better suited to the former. Analyzing high-res digital photographs of actual things in the world, a place where deep convolutional nets excel, clearly constitutes the latter.
On the other hand, when noise is the dominant feature, for example, in a medical case-control study with 50 cases and 50 controls, traditional statistical methods may be better suited to the problem.
If anybody finds that video, please comment and I'll update.
|
Is there any supervised-learning problem that (deep) neural networks obviously couldn't outperform a
Somewhere on this playlist of lectures by Geoff Hinton (from his Coursera course on neural networks), there's a segment where he talks about two classes of problems:
Problems where noise is the key f
|
6,932
|
Is there any supervised-learning problem that (deep) neural networks obviously couldn't outperform any other methods?
|
Two perfectly linearly correlated variables. Can a deep network with 1 million hidden layers and 2 trillion neurons beat a simple linear regression?
EDITED
In my experience, sample collection is more expensive than computation. I mean, we can just hire some Amazon instances, run deep learning training and then come back a few days later. The cost in my field is about $200 USD. The cost is minimal. My colleagues earn more than that in a day.
Sample collection generally requires domain knowledge and specialized equipment. Deep learning is only suitable for problems with cheap and easily accessible data sets, such as natural language processing, image processing and anything that you can scrape off the Internet.
|
Is there any supervised-learning problem that (deep) neural networks obviously couldn't outperform a
|
Two perfectly linearly correlated variables. Can a deep network with 1 million hidden layers and 2 trillion neurons beat a simple linear regression?
EDITED
In my experience, sample collection is more e
|
Is there any supervised-learning problem that (deep) neural networks obviously couldn't outperform any other methods?
Two perfectly linearly correlated variables. Can a deep network with 1 million hidden layers and 2 trillion neurons beat a simple linear regression?
EDITED
In my experience, sample collection is more expensive than computation. I mean, we can just hire some Amazon instances, run deep learning training and then come back a few days later. The cost in my field is about $200 USD. The cost is minimal. My colleagues earn more than that in a day.
Sample collection generally requires domain knowledge and specialized equipment. Deep learning is only suitable for problems with cheap and easily accessible data sets, such as natural language processing, image processing and anything that you can scrape off the Internet.
|
Is there any supervised-learning problem that (deep) neural networks obviously couldn't outperform a
Two perfectly linearly correlated variables. Can a deep network with 1 million hidden layers and 2 trillion neurons beat a simple linear regression?
EDITED
In my experience, sample collection is more e
|
6,933
|
Is there any supervised-learning problem that (deep) neural networks obviously couldn't outperform any other methods?
|
To be honest, it is not possible for a deep-learning method to outperform kernel methods. Why? It is very simple: any network, be it deep or shallow, can be described by a kernel. Thus a kernel can reproduce any result coming from deep learning. However, a kernel method has access to other, more powerful methods than deep learning ones. Indeed, today, kernel machines obtain results that are far better than any deep learning approach.
EDIT : As I received a warning concerning this answer, please let me detail it.
I reference this paper : https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3769804.
Every neural network (NN) can be input into a kernel machine; this has been known for more than a decade. For instance, in our kernel machines, we can input any NN; there is an interface to it, see section 2.3.7.
A kernel machine has access to other methods than NNs. See for instance section 3, which allows any PDE approach.
We benchmarked both approaches. I am sorry, but our kernel machines always retrieved better results than NN ones, as for instance the MNIST test in https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3766451
|
Is there any supervised-learning problem that (deep) neural networks obviously couldn't outperform a
|
|
Is there any supervised-learning problem that (deep) neural networks obviously couldn't outperform any other methods?
To be honest, it is not possible for a deep-learning method to outperform kernel methods. Why? It is very simple: any network, be it deep or shallow, can be described by a kernel. Thus a kernel can reproduce any result coming from deep learning. However, a kernel method has access to other, more powerful methods than deep learning ones. Indeed, today, kernel machines obtain results that are far better than any deep learning approach.
EDIT : As I received a warning concerning this answer, please let me detail it.
I reference this paper : https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3769804.
Every neural network (NN) can be input into a kernel machine; this has been known for more than a decade. For instance, in our kernel machines, we can input any NN; there is an interface to it, see section 2.3.7.
A kernel machine has access to other methods than NNs. See for instance section 3, which allows any PDE approach.
We benchmarked both approaches. I am sorry, but our kernel machines always retrieved better results than NN ones, as for instance the MNIST test in https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3766451
|
Is there any supervised-learning problem that (deep) neural networks obviously couldn't outperform a
|
6,934
|
Determine different clusters of 1d data from database
|
For one-dimensional data, don't use cluster analysis.
Cluster analysis is usually a multivariate technique. Or let me better put it the other way around: for one-dimensional data -- which is completely ordered -- there are much better techniques. Using k-means and similar techniques here is a total waste, unless you put in enough effort to actually optimize them for the 1-d case.
Just to give you an example: for k-means it is common to use k random objects as initial seeds. For one dimensional data, it's fairly easy to do better by just using the appropriate quantiles (at 1/(2k), 3/(2k), 5/(2k), etc.), after sorting the data once, and then optimize from this starting point. However, 2D data cannot be sorted completely. And in a grid, there likely will be empty cells.
I also wouldn't call these clusters; I would call them intervals. What you really want to do is to optimize the interval borders. If you do k-means, it will test for each object whether it should be moved to another cluster. That does not make sense in 1D: only the objects at the interval borders need to be checked. That obviously is much faster, as there are only ~2k objects there. If they do not already prefer other intervals, more central objects will not either.
You may want to look into techniques such as Jenks Natural Breaks optimization, for example.
Or you can do a kernel density estimation and look for local minima of the density to split there. The nice thing is that you do not need to specify k for this!
See this answer for an example of how to do this in Python (green markers are the cluster modes; red markers are points where the data is cut; the y axis is the log-likelihood of the density):
P.S. please use the search function. Here are some questions on 1-d data clustering that you missed:
Clustering 1D data
https://stackoverflow.com/questions/7869609/cluster-one-dimensional-data-optimally
https://stackoverflow.com/questions/11513484/1d-number-array-clustering
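As a rough sketch of the density-based splitting idea above (my own toy example, not from the linked answer; the two-mode sample and the bandwidth factor of 0.3 are arbitrary choices):

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)
# two well-separated 1-d groups (hypothetical data)
x = np.concatenate([rng.normal(0, 1, 200), rng.normal(8, 1, 200)])

kde = gaussian_kde(x, bw_method=0.3)   # bandwidth factor chosen by hand
grid = np.linspace(x.min(), x.max(), 500)
dens = kde(grid)

# interior local minima of the density are the cut points between intervals
is_min = (dens[1:-1] < dens[:-2]) & (dens[1:-1] < dens[2:])
cuts = grid[1:-1][is_min]
labels = np.digitize(x, cuts)   # interval index for each point
```

Note that, as the answer says, no k had to be specified: the number of intervals falls out of the number of density minima (here, a single cut between the two modes).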
|
Determine different clusters of 1d data from database
|
For one-dimensional data, don't use cluster analysis.
Cluster analysis is usually a multivariate technique. Or let me better put it the other way around: for one-dimensional data -- which is completely
|
Determine different clusters of 1d data from database
For one-dimensional data, don't use cluster analysis.
Cluster analysis is usually a multivariate technique. Or let me better put it the other way around: for one-dimensional data -- which is completely ordered -- there are much better techniques. Using k-means and similar techniques here is a total waste, unless you put in enough effort to actually optimize them for the 1-d case.
Just to give you an example: for k-means it is common to use k random objects as initial seeds. For one dimensional data, it's fairly easy to do better by just using the appropriate quantiles (at 1/(2k), 3/(2k), 5/(2k), etc.), after sorting the data once, and then optimize from this starting point. However, 2D data cannot be sorted completely. And in a grid, there likely will be empty cells.
I also wouldn't call these clusters; I would call them intervals. What you really want to do is to optimize the interval borders. If you do k-means, it will test for each object whether it should be moved to another cluster. That does not make sense in 1D: only the objects at the interval borders need to be checked. That obviously is much faster, as there are only ~2k objects there. If they do not already prefer other intervals, more central objects will not either.
You may want to look into techniques such as Jenks Natural Breaks optimization, for example.
Or you can do a kernel density estimation and look for local minima of the density to split there. The nice thing is that you do not need to specify k for this!
See this answer for an example of how to do this in Python (green markers are the cluster modes; red markers are points where the data is cut; the y axis is the log-likelihood of the density):
P.S. please use the search function. Here are some questions on 1-d data clustering that you missed:
Clustering 1D data
https://stackoverflow.com/questions/7869609/cluster-one-dimensional-data-optimally
https://stackoverflow.com/questions/11513484/1d-number-array-clustering
|
Determine different clusters of 1d data from database
For one-dimensional data, don't use cluster analysis.
Cluster analysis is usually a multivariate technique. Or let me better put it the other way around: for one-dimensional data -- which is completely
|
6,935
|
Determine different clusters of 1d data from database
|
One-dimensional clustering can be done optimally and efficiently, which may be able to give you insight on the structure of your data.
In the one-dimensional case, there are methods that are optimal and efficient (O(kn)), and as a bonus there are even regularized clustering algorithms that will let you automatically select the number of clusters! I recommend this survey: https://cs.au.dk/~larsen/papers/1dkmeans.pdf
R implementations can be found on the Ckmeans.1d.dp package:
https://cran.r-project.org/web/packages/Ckmeans.1d.dp/index.html
As a side note, 1-dimensional clustering can be used for quantization, where you represent your input data using a smaller set of values; this can help with compression, or to speed up searching for example.
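To make the dynamic-programming idea concrete, here is a small self-contained Python sketch of exact 1-d k-means (an O(k n^2) version for clarity; the survey and Ckmeans.1d.dp above implement the faster O(kn) variants). The function name and toy data are mine, not from the package:

```python
import numpy as np

def kmeans_1d_dp(x, k):
    """Exact 1-d k-means by dynamic programming on the sorted data."""
    x = np.sort(np.asarray(x, dtype=float))
    n = len(x)
    s1 = np.concatenate(([0.0], np.cumsum(x)))       # prefix sums
    s2 = np.concatenate(([0.0], np.cumsum(x * x)))   # prefix sums of squares

    def sse(j, i):
        # within-cluster sum of squared errors of x[j:i]
        s = s1[i] - s1[j]
        return (s2[i] - s2[j]) - s * s / (i - j)

    INF = float("inf")
    # D[m][i]: best cost of splitting the first i points into m intervals
    D = [[INF] * (n + 1) for _ in range(k + 1)]
    back = [[0] * (n + 1) for _ in range(k + 1)]
    D[0][0] = 0.0
    for m in range(1, k + 1):
        for i in range(m, n + 1):
            for j in range(m - 1, i):
                c = D[m - 1][j] + sse(j, i)
                if c < D[m][i]:
                    D[m][i], back[m][i] = c, j
    # walk back to recover the interval boundaries
    bounds, i = [], n
    for m in range(k, 0, -1):
        j = back[m][i]
        bounds.append((j, i))
        i = j
    return x, bounds[::-1], D[k][n]

x, bounds, cost = kmeans_1d_dp([10, 1, 2, 12, 3, 11], k=2)
# optimal split separates {1,2,3} from {10,11,12}; total SSE = 2 + 2 = 4
```

This is exact (no dependence on random seeds), which is precisely the advantage over plain k-means in one dimension.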
|
Determine different clusters of 1d data from database
|
One-dimensional clustering can be done optimally and efficiently, which may be able to give you insight on the structure of your data.
In the one-dimensional case, there are methods that are optimal a
|
Determine different clusters of 1d data from database
One-dimensional clustering can be done optimally and efficiently, which may be able to give you insight on the structure of your data.
In the one-dimensional case, there are methods that are optimal and efficient (O(kn)), and as a bonus there are even regularized clustering algorithms that will let you automatically select the number of clusters! I recommend this survey: https://cs.au.dk/~larsen/papers/1dkmeans.pdf
R implementations can be found on the Ckmeans.1d.dp package:
https://cran.r-project.org/web/packages/Ckmeans.1d.dp/index.html
As a side note, 1-dimensional clustering can be used for quantization, where you represent your input data using a smaller set of values; this can help with compression, or to speed up searching for example.
|
Determine different clusters of 1d data from database
One-dimensional clustering can be done optimally and efficiently, which may be able to give you insight on the structure of your data.
In the one-dimensional case, there are methods that are optimal a
|
6,936
|
Determine different clusters of 1d data from database
|
Is your question whether you should cluster or what method you should use to cluster?
Regarding whether you should cluster, it depends on whether you want to automatically partition your data (for example, if you want to repeat this partitioning several times). If you are doing this only once, you can just look at the histogram of the distribution of your values and partition it by eye, as proposed in the comments. I would recommend looking at the data by eye anyway, since it could help you determine how many clusters you want and also whether the clustering "worked".
Regarding the type of clustering, k-means should be fine if there are "real" clusters in the data. If you don't see any clusters in the histogram, it doesn't make much sense clustering it anyway, since any partitioning of your data range will give valid clusters (or, in the case of random initialization of k-means, you will get different clusters on each run).
|
Determine different clusters of 1d data from database
|
Is your question whether you should cluster or what method you should use to cluster?
Regarding whether you should cluster, it depends if you want to automatically partition your data (for example if
|
Determine different clusters of 1d data from database
Is your question whether you should cluster or what method you should use to cluster?
Regarding whether you should cluster, it depends on whether you want to automatically partition your data (for example, if you want to repeat this partitioning several times). If you are doing this only once, you can just look at the histogram of the distribution of your values and partition it by eye, as proposed in the comments. I would recommend looking at the data by eye anyway, since it could help you determine how many clusters you want and also whether the clustering "worked".
Regarding the type of clustering, k-means should be fine if there are "real" clusters in the data. If you don't see any clusters in the histogram, it doesn't make much sense clustering it anyway, since any partitioning of your data range will give valid clusters (or, in the case of random initialization of k-means, you will get different clusters on each run).
|
Determine different clusters of 1d data from database
Is your question whether you should cluster or what method you should use to cluster?
Regarding whether you should cluster, it depends if you want to automatically partition your data (for example if
|
6,937
|
Determine different clusters of 1d data from database
|
You can try:
KMeans, GMM or other methods by specifying n_clusters= no. of peaks in kernel density plot.
KMeans, GMM or other methods by determining the optimum no. of clusters based on some metrics. More info: https://en.wikipedia.org/wiki/Determining_the_number_of_clusters_in_a_data_set
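As one hedged illustration of the second option (hypothetical data; assumes scikit-learn is available), a common metric-based choice is to fit GMMs for several candidate numbers of components and keep the one with the lowest BIC:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# two well-separated 1-d groups, reshaped to the (n_samples, 1) layout sklearn expects
x = np.concatenate([rng.normal(-5, 1, 300), rng.normal(5, 1, 300)])[:, None]

# fit a GMM for each candidate k and keep the k with the lowest BIC
bic = {k: GaussianMixture(n_components=k, random_state=0).fit(x).bic(x)
       for k in range(1, 6)}
best_k = min(bic, key=bic.get)
```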
|
Determine different clusters of 1d data from database
|
You can try:
KMeans, GMM or other methods by specifying n_clusters= no. of peaks in kernel density plot.
KMeans, GMM or other methods by determining the optimum no. of clusters based on some metrics.
|
Determine different clusters of 1d data from database
You can try:
KMeans, GMM or other methods by specifying n_clusters= no. of peaks in kernel density plot.
KMeans, GMM or other methods by determining the optimum no. of clusters based on some metrics. More info: https://en.wikipedia.org/wiki/Determining_the_number_of_clusters_in_a_data_set
|
Determine different clusters of 1d data from database
You can try:
KMeans, GMM or other methods by specifying n_clusters= no. of peaks in kernel density plot.
KMeans, GMM or other methods by determining the optimum no. of clusters based on some metrics.
|
6,938
|
Is there an accepted definition for the median of a sample on the plane, or higher ordered spaces?
|
I'm not sure there is one accepted definition for a multivariate median. The one I'm familiar with is Oja's median point, which minimizes the sum of volumes of simplices formed over subsets of points. (See the link for a technical definition.)
Update: The site referenced for the Oja definition above also has a nice paper covering a number of definitions of a multivariate median:
Geometric Measures of Data Depth
|
Is there an accepted definition for the median of a sample on the plane, or higher ordered spaces?
|
I'm not sure there is one accepted definition for a multivariate median. The one I'm familiar with is Oja's median point, which minimizes the sum of volumes of simplices formed over subsets of points
|
Is there an accepted definition for the median of a sample on the plane, or higher ordered spaces?
I'm not sure there is one accepted definition for a multivariate median. The one I'm familiar with is Oja's median point, which minimizes the sum of volumes of simplices formed over subsets of points. (See the link for a technical definition.)
Update: The site referenced for the Oja definition above also has a nice paper covering a number of definitions of a multivariate median:
Geometric Measures of Data Depth
|
Is there an accepted definition for the median of a sample on the plane, or higher ordered spaces?
I'm not sure there is one accepted definition for a multivariate median. The one I'm familiar with is Oja's median point, which minimizes the sum of volumes of simplices formed over subsets of points
|
6,939
|
Is there an accepted definition for the median of a sample on the plane, or higher ordered spaces?
|
As @Ars said, there is no accepted definition (and this is a good point). There are several general families of ways to generalize quantiles on $\mathbb{R}^d$; I think the most significant are:
Generalize quantile process Let $P_n(A)$ be the empirical measure (=the proportion of observations in $A$). Then, with $\mathbb{A}$ a well chosen subset of the Borel sets in $\mathbb{R}^d$ and $\lambda$ a real valued measure,
you can define the empirical quantile function:
$U_n(t)=\inf \{\lambda(A) : P_n(A)\geq t,\ A\in\mathbb{A}\}$
Suppose you can find one $A_{t}$ that gives you the minimum. Then the set (or an element of the set) $A_{1/2-\epsilon}\cap A_{1/2+\epsilon}$ gives you the median when $\epsilon$ is made small enough. The definition of the median is recovered when using $\mathbb{A}=\{\,]-\infty,x] : x\in\mathbb{R}\}$ and $\lambda(]-\infty,x])=x$. Ars's answer falls into that framework, I guess... Tukey's halfspace location may be obtained using $\mathbb{A}(a)=\{ H_{x}=\{t\in \mathbb{R}^d :\; \langle a, t \rangle \leq x \} \}$ and $\lambda(H_{x})=x$ (with $x\in \mathbb{R}$, $a\in\mathbb{R}^d$).
variational definition and M-estimation
The idea here is that the $\alpha$-quantile $Q_{\alpha}$ of a random variable $Y$ in $\mathbb{R}$ can be defined through a variational equality.
The most common definition uses the quantile regression function $\rho_{\alpha}$ (also known as the pinball loss, guess why?): $Q_{\alpha}=\arg\inf_{x\in \mathbb{R}}\mathbb{E}[\rho_{\alpha}(Y-x)]$. The case $\alpha=1/2$ gives $\rho_{1/2}(y)=|y|$, and you can generalize that to higher dimensions using $l^1$ distances, as done in @Srikant's answer. This is the theoretical median, but it gives you the empirical median if you replace the expectation by the empirical expectation (mean).
But Koltchinskii proposes to use the Legendre-Fenchel transform, since $Q_{\alpha}=\arg\sup_s (s\alpha-f(s))$
where $f(s)=\frac{1}{2}\mathbb{E} [|s-Y|-|Y|+s]$ for $s\in \mathbb{R}$.
He gives a lot of deep reasons for that (see the paper ;)). Generalizing this to higher dimensions requires working with a vectorial $\alpha$ and replacing $s\alpha$ by $\langle s,\alpha\rangle$, but you can take $\alpha=(1/2,\dots,1/2)$.
Partial ordering You can generalize the definition of quantiles in $\mathbb{R}^d$ as soon as you can create a partial order (with equivalence classes).
Obviously there are bridges between the different formulations. They are not all obvious...
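As a quick illustrative sketch of the variational definition (my own toy code): minimizing the empirical pinball loss at $\alpha=1/2$ over a grid of candidates recovers the usual empirical median. Here $\rho_\alpha$ is taken up to a constant factor, which does not change the minimizer:

```python
import numpy as np

def pinball(u, alpha):
    # rho_alpha(u) = u * (alpha - 1{u < 0}); alpha = 1/2 gives |u| / 2
    return u * (alpha - (u < 0))

y = np.array([1.0, 2.0, 3.0, 10.0, 50.0])
grid = np.linspace(0.0, 60.0, 6001)
risk = [pinball(y - c, 0.5).mean() for c in grid]
c_star = grid[int(np.argmin(risk))]
print(c_star)  # ~3.0, the empirical median of y
```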
|
Is there an accepted definition for the median of a sample on the plane, or higher ordered spaces?
|
As @Ars said, there is no accepted definition (and this is a good point). There are several general families of ways to generalize quantiles on $\mathbb{R}^d$; I think the most significant are:
|
Is there an accepted definition for the median of a sample on the plane, or higher ordered spaces?
As @Ars said, there is no accepted definition (and this is a good point). There are several general families of ways to generalize quantiles on $\mathbb{R}^d$; I think the most significant are:
Generalize quantile process Let $P_n(A)$ be the empirical measure (=the proportion of observations in $A$). Then, with $\mathbb{A}$ a well chosen subset of the Borel sets in $\mathbb{R}^d$ and $\lambda$ a real valued measure,
you can define the empirical quantile function:
$U_n(t)=\inf \{\lambda(A) : P_n(A)\geq t,\ A\in\mathbb{A}\}$
Suppose you can find one $A_{t}$ that gives you the minimum. Then the set (or an element of the set) $A_{1/2-\epsilon}\cap A_{1/2+\epsilon}$ gives you the median when $\epsilon$ is made small enough. The definition of the median is recovered when using $\mathbb{A}=\{\,]-\infty,x] : x\in\mathbb{R}\}$ and $\lambda(]-\infty,x])=x$. Ars's answer falls into that framework, I guess... Tukey's halfspace location may be obtained using $\mathbb{A}(a)=\{ H_{x}=\{t\in \mathbb{R}^d :\; \langle a, t \rangle \leq x \} \}$ and $\lambda(H_{x})=x$ (with $x\in \mathbb{R}$, $a\in\mathbb{R}^d$).
variational definition and M-estimation
The idea here is that the $\alpha$-quantile $Q_{\alpha}$ of a random variable $Y$ in $\mathbb{R}$ can be defined through a variational equality.
The most common definition uses the quantile regression function $\rho_{\alpha}$ (also known as the pinball loss, guess why?): $Q_{\alpha}=\arg\inf_{x\in \mathbb{R}}\mathbb{E}[\rho_{\alpha}(Y-x)]$. The case $\alpha=1/2$ gives $\rho_{1/2}(y)=|y|$, and you can generalize that to higher dimensions using $l^1$ distances, as done in @Srikant's answer. This is the theoretical median, but it gives you the empirical median if you replace the expectation by the empirical expectation (mean).
But Koltchinskii proposes to use the Legendre-Fenchel transform, since $Q_{\alpha}=\arg\sup_s (s\alpha-f(s))$
where $f(s)=\frac{1}{2}\mathbb{E} [|s-Y|-|Y|+s]$ for $s\in \mathbb{R}$.
He gives a lot of deep reasons for that (see the paper ;)). Generalizing this to higher dimensions requires working with a vectorial $\alpha$ and replacing $s\alpha$ by $\langle s,\alpha\rangle$, but you can take $\alpha=(1/2,\dots,1/2)$.
Partial ordering You can generalize the definition of quantiles in $\mathbb{R}^d$ as soon as you can create a partial order (with equivalence classes).
Obviously there are bridges between the different formulations. They are not all obvious...
|
Is there an accepted definition for the median of a sample on the plane, or higher ordered spaces?
As @Ars said, there is no accepted definition (and this is a good point). There are several general families of ways to generalize quantiles on $\mathbb{R}^d$; I think the most significant are:
|
6,940
|
Is there an accepted definition for the median of a sample on the plane, or higher ordered spaces?
|
There are distinct ways to generalize the concept of median to higher dimensions. One not yet mentioned, but which was proposed long ago, is to construct a convex hull, peel it away, and iterate for as long as you can: what's left in the last hull is a set of points that are all candidates to be "medians."
"Head-banging" is another more recent attempt (c. 1980) to construct a robust center to a 2D point cloud. (The link is to documentation and software available at the US National Cancer Institute.)
The principal reason why there are multiple distinct generalizations and no one obvious solution is that $\mathbb{R}^1$ can be ordered but $\mathbb{R}^2, \mathbb{R}^3, \dots$ cannot be.
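A minimal sketch of the hull-peeling idea (my own toy code, assuming SciPy's Qhull wrapper; the stopping rule here is a simplification):

```python
import numpy as np
from scipy.spatial import ConvexHull

def hull_peel(points):
    # Repeatedly strip the convex-hull vertices; whatever survives the
    # last peel is the set of "median" candidates.
    pts = np.asarray(points, dtype=float)
    while len(pts) > 3:
        inner = np.delete(pts, ConvexHull(pts).vertices, axis=0)
        if len(inner) == 0:   # every remaining point was on the hull
            break
        pts = inner
    return pts

rng = np.random.default_rng(1)
core = hull_peel(rng.normal(size=(60, 2)))
print(len(core))  # only a few central points remain
```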
|
Is there an accepted definition for the median of a sample on the plane, or higher ordered spaces?
|
There are distinct ways to generalize the concept of median to higher dimensions. One not yet mentioned, but which was proposed long ago, is to construct a convex hull, peel it away, and iterate for
|
Is there an accepted definition for the median of a sample on the plane, or higher ordered spaces?
There are distinct ways to generalize the concept of median to higher dimensions. One not yet mentioned, but which was proposed long ago, is to construct a convex hull, peel it away, and iterate for as long as you can: what's left in the last hull is a set of points that are all candidates to be "medians."
"Head-banging" is another more recent attempt (c. 1980) to construct a robust center to a 2D point cloud. (The link is to documentation and software available at the US National Cancer Institute.)
The principal reason why there are multiple distinct generalizations and no one obvious solution is that $\mathbb{R}^1$ can be ordered but $\mathbb{R}^2, \mathbb{R}^3, \dots$ cannot be.
|
Is there an accepted definition for the median of a sample on the plane, or higher ordered spaces?
There are distinct ways to generalize the concept of median to higher dimensions. One not yet mentioned, but which was proposed long ago, is to construct a convex hull, peel it away, and iterate for
|
6,941
|
Is there an accepted definition for the median of a sample on the plane, or higher ordered spaces?
|
The geometric median is the point with the smallest average Euclidean distance from the samples.
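It can be computed with the standard Weiszfeld iteration; a sketch (my own toy code; the epsilon guard against zero distances is a simplification):

```python
import numpy as np

def geometric_median(points, n_iter=200):
    pts = np.asarray(points, dtype=float)
    m = pts.mean(axis=0)                 # start from the centroid
    for _ in range(n_iter):
        d = np.linalg.norm(pts - m, axis=1)
        w = 1.0 / np.maximum(d, 1e-12)   # crude guard against zero distance
        m = (w[:, None] * pts).sum(axis=0) / w.sum()
    return m

pts = np.array([[0.0, 0.0], [0.0, 0.0], [0.0, 0.0], [100.0, 0.0]])
print(geometric_median(pts))  # pulled to the majority cluster, ~[0, 0]
```

Unlike the componentwise median, the result is equivariant under rotations of the data.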
|
Is there an accepted definition for the median of a sample on the plane, or higher ordered spaces?
|
The geometric median is the point with the smallest average Euclidean distance from the samples.
|
Is there an accepted definition for the median of a sample on the plane, or higher ordered spaces?
The geometric median is the point with the smallest average Euclidean distance from the samples.
|
Is there an accepted definition for the median of a sample on the plane, or higher ordered spaces?
The geometric median is the point with the smallest average Euclidean distance from the samples.
|
6,942
|
Is there an accepted definition for the median of a sample on the plane, or higher ordered spaces?
|
The Tukey halfspace median can be extended to >2 dimensions using DEEPLOC, an algorithm due to Struyf and Rousseeuw; see here for details.
The algorithm is used to approximate the point of greatest depth efficiently; naive methods which attempt to determine this exactly usually run afoul of (the computational version of) "the curse of dimensionality", where the runtime required to calculate a statistic grows exponentially with the number of dimensions of the space.
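To illustrate why naive methods are expensive even in 2-D, here is a brute-force sketch (my own toy code, not DEEPLOC) that approximates the halfspace depth of a point by scanning a grid of directions:

```python
import numpy as np

def tukey_depth(point, data, n_dir=3600):
    # Approximate halfspace depth: scan a grid of directions and take the
    # smallest count of points in a closed halfspace through `point`.
    diffs = np.asarray(data, dtype=float) - np.asarray(point, dtype=float)
    depth = len(diffs)
    for theta in np.linspace(0.0, np.pi, n_dir, endpoint=False):
        proj = diffs @ np.array([np.cos(theta), np.sin(theta)])
        depth = min(depth, int(np.sum(proj >= 0)), int(np.sum(proj <= 0)))
    return depth

data = [(1, 0), (-1, 0), (0, 1), (0, -1)]
print(tukey_depth((0, 0), data), tukey_depth((1, 0), data))  # 2 1
```

The centre has higher depth than an extreme point; the Tukey median is a deepest point. The direction scan already costs O(n_dir * n) in 2-D, and the number of candidate directions explodes with dimension.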
|
Is there an accepted definition for the median of a sample on the plane, or higher ordered spaces?
|
The Tukey halfspace median can be extended to >2 dimensions using DEEPLOC, an algorithm due to Struyf and Rousseeuw; see here for details.
The algorithm is used to approximate the point of greatest de
|
Is there an accepted definition for the median of a sample on the plane, or higher ordered spaces?
The Tukey halfspace median can be extended to >2 dimensions using DEEPLOC, an algorithm due to Struyf and Rousseeuw; see here for details.
The algorithm is used to approximate the point of greatest depth efficiently; naive methods which attempt to determine this exactly usually run afoul of (the computational version of) "the curse of dimensionality", where the runtime required to calculate a statistic grows exponentially with the number of dimensions of the space.
|
Is there an accepted definition for the median of a sample on the plane, or higher ordered spaces?
The Tukey halfspace median can be extended to >2 dimensions using DEEPLOC, an algorithm due to Struyf and Rousseeuw; see here for details.
The algorithm is used to approximate the point of greatest de
|
6,943
|
Is there an accepted definition for the median of a sample on the plane, or higher ordered spaces?
|
A definition that comes close to it, for unimodal distributions, is the Tukey halfspace median
http://cgm.cs.mcgill.ca/~athens/Geometric-Estimators/halfspace.html
http://www.isical.ac.in/~statmath/html/publication/Tukey_tech_rep.pdf
https://www.isical.ac.in/~statmath/report/11310-15.pdf
|
Is there an accepted definition for the median of a sample on the plane, or higher ordered spaces?
|
A definition that comes close to it, for unimodal distributions, is the Tukey halfspace median
http://cgm.cs.mcgill.ca/~athens/Geometric-Estimators/halfspace.html
http://www.isical.ac.in/~statmath/h
|
Is there an accepted definition for the median of a sample on the plane, or higher ordered spaces?
A definition that comes close to it, for unimodal distributions, is the Tukey halfspace median
http://cgm.cs.mcgill.ca/~athens/Geometric-Estimators/halfspace.html
http://www.isical.ac.in/~statmath/html/publication/Tukey_tech_rep.pdf
https://www.isical.ac.in/~statmath/report/11310-15.pdf
|
Is there an accepted definition for the median of a sample on the plane, or higher ordered spaces?
A definition that comes close to it, for unimodal distributions, is the Tukey halfspace median
http://cgm.cs.mcgill.ca/~athens/Geometric-Estimators/halfspace.html
http://www.isical.ac.in/~statmath/h
|
6,944
|
Is there an accepted definition for the median of a sample on the plane, or higher ordered spaces?
|
I do not know if any such definition exists but I will try and extend the standard definition of the median to $R^2$. I will use the following notation:
$X$, $Y$: the random variables associated with the two dimensions.
$m_x$, $m_y$: the corresponding medians.
$f(x,y)$: the joint pdf for our random variables
To extend the definition of the median to $R^2$, we choose $m_x$ and $m_y$ to minimize the following:
$E(|(x,y) - (m_x,m_y)|)$
The problem now is that we need a definition for what we mean by:
$|(x,y) - (m_x,m_y)|$
The above is in a sense a distance metric and several possible candidate definitions are possible.
Euclidean Metric
$|(x,y) - (m_x,m_y)| = \sqrt{(x-m_x)^2 + (y-m_y)^2}$
Computing the median under the euclidean metric will require computing the expectation of the above with respect to the joint density $f(x,y)$.
Taxicab Metric
$|(x,y) - (m_x,m_y)| = |x-m_x| + |y-m_y|$
Computing the median in the case of the taxicab metric involves computing the median of $X$ and $Y$ separately as the metric is separable in $x$ and $y$.
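A tiny sketch of the separability point (my own toy numbers): the componentwise medians minimize the total taxicab distance, as a brute-force search confirms:

```python
import numpy as np

pts = np.array([[0, 0], [1, 5], [2, 1], [3, 4], [10, 2]])

# Separable case: the taxicab median is just the componentwise median
m = np.median(pts, axis=0)

def taxicab_cost(c):
    # Total L1 distance from candidate centre c to all points
    return np.abs(pts - np.asarray(c)).sum()

best = min(((x, y) for x in range(11) for y in range(6)), key=taxicab_cost)
print(m, best)  # [2. 2.] (2, 2)
```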
|
Is there an accepted definition for the median of a sample on the plane, or higher ordered spaces?
|
I do not know if any such definition exists but I will try and extend the standard definition of the median to $R^2$. I will use the following notation:
$X$, $Y$: the random variables associated with
|
Is there an accepted definition for the median of a sample on the plane, or higher ordered spaces?
I do not know if any such definition exists but I will try and extend the standard definition of the median to $R^2$. I will use the following notation:
$X$, $Y$: the random variables associated with the two dimensions.
$m_x$, $m_y$: the corresponding medians.
$f(x,y)$: the joint pdf for our random variables
To extend the definition of the median to $R^2$, we choose $m_x$ and $m_y$ to minimize the following:
$E(|(x,y) - (m_x,m_y)|)$
The problem now is that we need a definition for what we mean by:
$|(x,y) - (m_x,m_y)|$
The above is in a sense a distance metric and several possible candidate definitions are possible.
Euclidean Metric
$|(x,y) - (m_x,m_y)| = \sqrt{(x-m_x)^2 + (y-m_y)^2}$
Computing the median under the euclidean metric will require computing the expectation of the above with respect to the joint density $f(x,y)$.
Taxicab Metric
$|(x,y) - (m_x,m_y)| = |x-m_x| + |y-m_y|$
Computing the median in the case of the taxicab metric involves computing the median of $X$ and $Y$ separately as the metric is separable in $x$ and $y$.
|
Is there an accepted definition for the median of a sample on the plane, or higher ordered spaces?
I do not know if any such definition exists but I will try and extend the standard definition of the median to $R^2$. I will use the following notation:
$X$, $Y$: the random variables associated with
|
6,945
|
How to interpret root mean squared error (RMSE) vs standard deviation?
|
Let's say that our responses are $y_1, \dots, y_n$ and our predicted values are $\hat y_1, \dots, \hat y_n$.
The sample variance (using $n$ rather than $n-1$ for simplicity) is $\frac{1}{n} \sum_{i=1}^n (y_i - \bar y)^2$ while the MSE is $\frac{1}{n} \sum_{i=1}^n (y_i - \hat y_i)^2$. Thus the sample variance gives how much the responses vary around the mean while the MSE gives how much the responses vary around our predictions. If we think of the overall mean $\bar y$ as being the simplest predictor that we'd ever consider, then by comparing the MSE to the sample variance of the responses we can see how much more variation we've explained with our model. This is exactly what the $R^2$ value does in linear regression.
Consider the following picture:
The sample variance of the $y_i$ is the variability around the horizontal line. If we project all of the data onto the $Y$ axis we can see this. The MSE is the mean squared distance to the regression line, i.e. the variability around the regression line (i.e. the $\hat y_i$). So the variability measured by the sample variance is the averaged squared distance to the horizontal line, which we can see is substantially more than the average squared distance to the regression line.
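A small sketch of that comparison (toy data, assuming NumPy): the MSE around a least-squares line is smaller than the variance around the mean, and their ratio gives $R^2$:

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0.0, 10.0, 50)
y = 2.0 * x + 1.0 + rng.normal(0.0, 1.0, x.size)

slope, intercept = np.polyfit(x, y, 1)
y_hat = slope * x + intercept

mse = np.mean((y - y_hat) ** 2)      # variability around the fitted line
var = np.mean((y - y.mean()) ** 2)   # variability around the horizontal line
r_squared = 1.0 - mse / var
print(mse < var)    # True: the fit explains part of the variation
print(r_squared)    # close to 1 for this strongly linear toy data
```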
|
How to interpret root mean squared error (RMSE) vs standard deviation?
|
Let's say that our responses are $y_1, \dots, y_n$ and our predicted values are $\hat y_1, \dots, \hat y_n$.
The sample variance (using $n$ rather than $n-1$ for simplicity) is $\frac{1}{n} \sum_{i=1}
|
How to interpret root mean squared error (RMSE) vs standard deviation?
Let's say that our responses are $y_1, \dots, y_n$ and our predicted values are $\hat y_1, \dots, \hat y_n$.
The sample variance (using $n$ rather than $n-1$ for simplicity) is $\frac{1}{n} \sum_{i=1}^n (y_i - \bar y)^2$ while the MSE is $\frac{1}{n} \sum_{i=1}^n (y_i - \hat y_i)^2$. Thus the sample variance gives how much the responses vary around the mean while the MSE gives how much the responses vary around our predictions. If we think of the overall mean $\bar y$ as being the simplest predictor that we'd ever consider, then by comparing the MSE to the sample variance of the responses we can see how much more variation we've explained with our model. This is exactly what the $R^2$ value does in linear regression.
Consider the following picture:
The sample variance of the $y_i$ is the variability around the horizontal line. If we project all of the data onto the $Y$ axis we can see this. The MSE is the mean squared distance to the regression line, i.e. the variability around the regression line (i.e. the $\hat y_i$). So the variability measured by the sample variance is the averaged squared distance to the horizontal line, which we can see is substantially more than the average squared distance to the regression line.
|
How to interpret root mean squared error (RMSE) vs standard deviation?
Let's say that our responses are $y_1, \dots, y_n$ and our predicted values are $\hat y_1, \dots, \hat y_n$.
The sample variance (using $n$ rather than $n-1$ for simplicity) is $\frac{1}{n} \sum_{i=1}
|
6,946
|
How to interpret root mean squared error (RMSE) vs standard deviation?
|
In the absence of better information, the mean value of the target variable can be considered a simple estimate for values of the target variable, whether in trying to model the existing data or trying to predict future values. This simple estimate of the target variable (that is, predicted values all equal the mean of the target variable) will be off by a certain error. A standard way to measure the average error is the standard deviation (SD), $ \sqrt{\frac{1}{n} \sum_{i=1}^n (y_i - \bar y)^2}$, since the SD has the nice property of fitting a bell-shaped (Gaussian) distribution if the target variable is normally distributed. So, the SD can be considered the amount of error that naturally occurs in the estimates of the target variable. This makes it the benchmark that any model needs to try to beat.
There are various ways to measure the error of a model estimation; among them, the Root Mean Squared Error (RMSE) that you mentioned, $ \sqrt{\frac{1}{n} \sum_{i=1}^n (y_i - \hat y_i)^2}$, is one of the most popular. It is conceptually quite similar to the SD: instead of measuring how far off an actual value is from the mean, it uses essentially the same formula to measure how far off an actual value is from the model's prediction for that value. A good model should, on average, have better predictions than the naïve estimate of the mean for all predictions. Thus, a useful model should achieve an RMSE smaller than the SD of the target variable.
This argument applies to other measures of error, not just to RMSE, but the RMSE is particularly attractive for direct comparison to the SD because their mathematical formulas are analogous.
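A tiny sketch of the benchmark idea (toy numbers): the RMSE of the naive "predict the mean" model is exactly the population-formula SD:

```python
import numpy as np

def rmse(y, y_hat):
    return np.sqrt(np.mean((y - y_hat) ** 2))

y = np.array([3.0, 7.0, 7.0, 19.0])
naive = np.full_like(y, y.mean())   # every prediction is the mean
print(rmse(y, naive), y.std())      # 6.0 6.0: the naive RMSE is the SD
```

Any model worth keeping should bring the RMSE below that baseline.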
Edit:
Someone asked me offline for a citation that supports the idea of the SD being a benchmark for the RMSE. Personally, I first learnt this principle from Shmueli et al. 2016. Sorry, but I do not have the book handy, so I cannot cite a page number.
Shmueli, G., Bruce, P. C., Stephens, M., & Patel, N. R. (2016). Data Mining for Business Analytics: Concepts, Techniques, and Applications with JMP Pro (3rd Edition). Wiley.
|
How to interpret root mean squared error (RMSE) vs standard deviation?
|
In the absence of better information, the mean value of the target variable can be considered a simple estimate for values of the target variable, whether in trying to model the existing data or tryin
|
How to interpret root mean squared error (RMSE) vs standard deviation?
In the absence of better information, the mean value of the target variable can be considered a simple estimate for values of the target variable, whether in trying to model the existing data or trying to predict future values. This simple estimate of the target variable (that is, predicted values all equal the mean of the target variable) will be off by a certain error. A standard way to measure the average error is the standard deviation (SD), $ \sqrt{\frac{1}{n} \sum_{i=1}^n (y_i - \bar y)^2}$, since the SD has the nice property of fitting a bell-shaped (Gaussian) distribution if the target variable is normally distributed. So, the SD can be considered the amount of error that naturally occurs in the estimates of the target variable. This makes it the benchmark that any model needs to try to beat.
There are various ways to measure the error of a model estimation; among them, the Root Mean Squared Error (RMSE) that you mentioned, $ \sqrt{\frac{1}{n} \sum_{i=1}^n (y_i - \hat y_i)^2}$, is one of the most popular. It is conceptually quite similar to the SD: instead of measuring how far off an actual value is from the mean, it uses essentially the same formula to measure how far off an actual value is from the model's prediction for that value. A good model should, on average, have better predictions than the naïve estimate of the mean for all predictions. Thus, a useful model should achieve an RMSE smaller than the SD of the target variable.
This argument applies to other measures of error, not just to RMSE, but the RMSE is particularly attractive for direct comparison to the SD because their mathematical formulas are analogous.
Edit:
Someone asked me offline for a citation that supports the idea of the SD being a benchmark for the RMSE. Personally, I first learnt this principle from Shmueli et al. 2016. Sorry, but I do not have the book handy, so I cannot cite a page number.
Shmueli, G., Bruce, P. C., Stephens, M., & Patel, N. R. (2016). Data Mining for Business Analytics: Concepts, Techniques, and Applications with JMP Pro (3rd Edition). Wiley.
|
How to interpret root mean squared error (RMSE) vs standard deviation?
In the absence of better information, the mean value of the target variable can be considered a simple estimate for values of the target variable, whether in trying to model the existing data or tryin
|
6,947
|
How to interpret root mean squared error (RMSE) vs standard deviation?
|
In case you are talking about the mean squared error of prediction, here it can be:
$$
\frac{\sum_i(y_i-\hat{y}_i)^2}{n-p},
$$
depending on how many ($p$) parameters are estimated for the prediction, i.e., the loss of degrees of freedom (DF).
The sample variance can be:
$$
\frac{\sum_i(y_i - \bar{y}) ^2}{n-1},
$$
where the $\bar{y}$ is simply an estimator of the mean of $y_i$.
So you can consider the latter formula (sample variance) as a special case of the former (MSE), where $\hat{y}_i = \bar{y}$ and the loss of DF is 1, since the mean $\bar{y}$ is itself an estimate.
Or, if you do not care much about how $\hat{y}_i$ is predicted, but want to get a ballpark MSE on your model, you can still use the following formula to estimate it,
$$
\frac{\sum_i(y_i-\hat{y}_i)^2}{n},
$$
which is the easiest to compute.
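A small sketch comparing the three denominators on a toy linear fit ($p = 2$; the data are made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 30)
y = 3.0 * x + rng.normal(0.0, 0.5, x.size)

coef = np.polyfit(x, y, 1)              # p = 2 estimated parameters
resid = y - np.polyval(coef, x)

n, p = len(y), 2
mse_df = (resid ** 2).sum() / (n - p)    # degrees-of-freedom corrected
mse_plain = (resid ** 2).sum() / n       # simple n denominator
sample_var = ((y - y.mean()) ** 2).sum() / (n - 1)  # the p = 1 special case
print(mse_plain < mse_df < sample_var)  # True
```

The plain-$n$ version is always the smallest; the DF-corrected MSE sits between it and the sample variance when the model actually explains something.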
|
How to interpret root mean squared error (RMSE) vs standard deviation?
|
In case you are talking about the mean squared error of prediction, here it can be:
$$
\frac{\sum_i(y_i-\hat{y}_i)^2}{n-p},
$$
depending on how many (p) parameters are estimated for the prediction,
|
How to interpret root mean squared error (RMSE) vs standard deviation?
In case you are talking about the mean squared error of prediction, here it can be:
$$
\frac{\sum_i(y_i-\hat{y}_i)^2}{n-p},
$$
depending on how many ($p$) parameters are estimated for the prediction, i.e., the loss of degrees of freedom (DF).
The sample variance can be:
$$
\frac{\sum_i(y_i - \bar{y}) ^2}{n-1},
$$
where the $\bar{y}$ is simply an estimator of the mean of $y_i$.
So you can consider the latter formula (sample variance) as a special case of the former (MSE), where $\hat{y}_i = \bar{y}$ and the loss of DF is 1, since the mean $\bar{y}$ is itself an estimate.
Or, if you do not care much about how $\hat{y}_i$ is predicted, but want to get a ballpark MSE on your model, you can still use the following formula to estimate it,
$$
\frac{\sum_i(y_i-\hat{y}_i)^2}{n},
$$
which is the easiest to compute.
|
How to interpret root mean squared error (RMSE) vs standard deviation?
In case you are talking about the mean squared error of prediction, here it can be:
$$
\frac{\sum_i(y_i-\hat{y}_i)^2}{n-p},
$$
depending on how many (p) parameters are estimated for the prediction,
|
6,948
|
Why doesn't regularization solve Deep Neural Nets hunger for data?
|
The simple way to explain it is that regularization helps to not fit to the noise, it doesn't do much in terms of determining the shape of the signal. If you think of deep learning as a giant glorious function approximator, then you realize that it needs a lot of data to define the shape of the complex signal.
If there were no noise, then increasing the complexity of the NN would produce a better approximation. There would not be any penalty to the size of the NN; bigger would have been better in every case. Consider a Taylor approximation: more terms are always better for a non-polynomial function (ignoring numerical precision issues).
This breaks down in the presence of noise, because you start fitting to the noise. So, here comes regularization to help: it may reduce fitting to the noise, thus allowing us to build bigger NNs to fit nonlinear problems.
The following discussion is not essential to my answer, but I added it in part to answer some comments and motivate the main body of the answer above. Basically, the rest of my answer is like the french fries that come with a burger meal: you can skip it.
(Ir)relevant Case: Polynomial regression
Let's look at a toy example of a polynomial regression. It is also a pretty good approximator for many functions. We'll look at the $\sin(x)$ function in $x\in(-3,3)$ region. As you can see from its Taylor series below, 7th order expansion is already a pretty good fit, so we can expect that a polynomial of 7+ order should be a very good fit too:
Next, we're going to fit polynomials with progressively higher order to a small very noisy data set with 7 observations:
We can observe what we've been told about polynomials by many people in-the-know: they're unstable, and start to oscillate wildly with increase in the order of polynomials.
However, the problem is not the polynomials themselves. The problem is the noise. When we fit polynomials to noisy data, part of the fit is to the noise, not to the signal. Here's the same exact polynomials fit to the same data set but with noise completely removed. The fits are great!
Notice a visually perfect fit for order 6. This shouldn't be surprising, since 7 observations are all we need to uniquely identify an order-6 polynomial, and we saw from the Taylor approximation plot above that order 6 is already a very good approximation to $\sin(x)$ in our data range.
Also notice that higher order polynomials do not fit as well as order 6, because there are not enough observations to define them. So, let's look at what happens with 100 observations. On the chart below you see how a larger data set allowed us to fit higher order polynomials, thus accomplishing a better fit!
Great, but the problem is that we usually deal with noisy data. Look at what happens if you fit the same to 100 observations of very noisy data, see the chart below. We're back to square one: higher order polynomials produce horrible oscillating fits. So, increasing the data set didn't help that much in increasing the complexity of the model to better explain the data. This is, again, because a complex model fits better not only the shape of the signal, but the shape of the noise too.
Finally, let's try some lame regularization on this problem. The chart below shows regularization (with different penalties) applied to order 9 polynomial regression. Compare this to order (power) 9 polynomial fit above: at an appropriate level of regularization it is possible to fit higher order polynomials to noisy data.
Just in case it wasn't clear: I'm not suggesting using polynomial regression this way. Polynomials are good for local fits, so a piecewise polynomial can be a good choice. Fitting the entire domain with them is often a bad idea, because they are sensitive to noise, as should indeed be evident from the plots above. Whether the noise is numerical or from some other source is not that important in this context. The noise is noise, and polynomials will react to it passionately.
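A minimal sketch of the regularization experiment (my own toy version, not the code behind the charts): an order-7 polynomial fit to noisy $\sin(x)$, with and without an L2 (ridge) penalty on the monomial coefficients:

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(-3.0, 3.0, 12)
y = np.sin(x) + rng.normal(0.0, 0.5, x.size)

def ridge_polyfit(x, y, order, lam):
    # Closed-form ridge solution on a monomial design matrix
    X = np.vander(x, order + 1)
    return np.linalg.solve(X.T @ X + lam * np.eye(order + 1), X.T @ y)

b_ols = ridge_polyfit(x, y, 7, 0.0)      # plain least squares
b_ridge = ridge_polyfit(x, y, 7, 10.0)   # L2-penalized fit

X = np.vander(x, 8)
rss_ols = ((y - X @ b_ols) ** 2).sum()
rss_ridge = ((y - X @ b_ridge) ** 2).sum()

# The penalty shrinks the coefficient vector at a small cost in training
# fit, which is what tames the oscillations in the noisy-data charts.
print(np.linalg.norm(b_ridge) < np.linalg.norm(b_ols))  # True
print(rss_ols <= rss_ridge)                             # True
```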
|
Why doesn't regularization solve Deep Neural Nets hunger for data?
|
The simple way to explain it is that regularization helps to not fit to the noise, it doesn't do much in terms of determining the shape of the signal. If you think of deep learning as a giant glorious
|
Why doesn't regularization solve Deep Neural Nets hunger for data?
The simple way to explain it is that regularization helps to not fit to the noise, it doesn't do much in terms of determining the shape of the signal. If you think of deep learning as a giant glorious function approximator, then you realize that it needs a lot of data to define the shape of the complex signal.
If there were no noise, then increasing the complexity of the NN would produce a better approximation. There would not be any penalty to the size of the NN; bigger would have been better in every case. Consider a Taylor approximation: more terms are always better for a non-polynomial function (ignoring numerical precision issues).
This breaks down in the presence of noise, because you start fitting to the noise. So, here comes regularization to help: it may reduce fitting to the noise, thus allowing us to build bigger NNs to fit nonlinear problems.
The following discussion is not essential to my answer, but I added it in part to answer some comments and motivate the main body of the answer above. Basically, the rest of my answer is like the french fries that come with a burger meal: you can skip it.
(Ir)relevant Case: Polynomial regression
Let's look at a toy example of a polynomial regression. It is also a pretty good approximator for many functions. We'll look at the $\sin(x)$ function in $x\in(-3,3)$ region. As you can see from its Taylor series below, 7th order expansion is already a pretty good fit, so we can expect that a polynomial of 7+ order should be a very good fit too:
Next, we're going to fit polynomials with progressively higher order to a small very noisy data set with 7 observations:
We can observe what we've been told about polynomials by many people in-the-know: they're unstable, and start to oscillate wildly with increase in the order of polynomials.
However, the problem is not the polynomials themselves. The problem is the noise. When we fit polynomials to noisy data, part of the fit is to the noise, not to the signal. Here's the same exact polynomials fit to the same data set but with noise completely removed. The fits are great!
Notice a visually perfect fit for order 6. This shouldn't be surprising, since 7 observations are all we need to uniquely identify an order-6 polynomial, and we saw from the Taylor approximation plot above that order 6 is already a very good approximation to $\sin(x)$ in our data range.
Also notice that higher order polynomials do not fit as well as order 6, because there are not enough observations to define them. So, let's look at what happens with 100 observations. On the chart below you see how a larger data set allowed us to fit higher order polynomials, thus accomplishing a better fit!
Great, but the problem is that we usually deal with noisy data. Look at what happens if you fit the same to 100 observations of very noisy data, see the chart below. We're back to square one: higher order polynomials produce horrible oscillating fits. So, increasing data set didn't help that much in increasing the complexity of the model to better explain the data. This is, again, because complex model is fitting better not only to the shape of the signal, but to the shape of the noise too.
Finally, let's try some lame regularization on this problem. The chart below shows regularization (with different penalties) applied to order 9 polynomial regression. Compare this to order (power) 9 polynomial fit above: at an appropriate level of regularization it is possible to fit higher order polynomials to noisy data.
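The effect can be sketched numerically. Below is a minimal illustration (in Python, with made-up noise level and penalty values, not the exact settings used for the charts above): an order-9 polynomial fit to noisy $\sin(x)$ data, with and without an L2 (ridge) penalty. The penalty shrinks the coefficient vector, which is what damps the wild oscillations:

```python
import numpy as np

rng = np.random.default_rng(0)

# Noisy observations of sin(x) on (-3, 3)
n = 100
x = np.sort(rng.uniform(-3, 3, n))
y = np.sin(x) + rng.normal(0, 0.5, n)

# Order-9 polynomial design matrix (columns rescaled for conditioning)
order = 9
X = np.vander(x, order + 1, increasing=True)
X = X / np.abs(X).max(axis=0)

def ridge_fit(X, y, lam):
    """Closed-form ridge solution: w = (X'X + lam*I)^(-1) X'y."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

w_ols   = ridge_fit(X, y, 0.0)   # ordinary least squares
w_ridge = ridge_fit(X, y, 1.0)   # penalized fit

# The penalty shrinks the coefficient vector, taming the oscillations
print(np.linalg.norm(w_ols), np.linalg.norm(w_ridge))
```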
Just in case it wasn't clear: I'm not suggesting using polynomial regression this way. Polynomials are good for local fits, so a piece-wise polynomial can be a good choice. Fitting the entire domain with them is often a bad idea, because they are sensitive to noise, as should be evident from the plots above. Whether the noise is numerical or from some other source is not that important in this context. Noise is noise, and polynomials will react to it passionately.
|
6,949
|
Why doesn't regularization solve Deep Neural Nets hunger for data?
|
At this point in time, it's not well understood when and why certain regularization methods succeed and fail. In fact, it's not understood at all why deep learning works in the first place.
Considering the fact that a sufficiently deep neural net can memorize most well-behaved training data perfectly, there are considerably more wrong solutions than there are right for any particular deep net. Regularization, broadly speaking, is an attempt to limit the expressivity of models for these "wrong" solutions - where "wrong" is defined by heuristics we think are important for a particular domain. But often it is difficult to define the heuristic such that you don't lose the "right" expressivity with it. A great example of this is L2 penalties.
Very few methods that would be considered a form of regularization are generally applicable to all application areas of ML. Vision, NLP, and structured prediction problems all have their own cookbook of regularization techniques that have been demonstrated to be effective experimentally for those particular domains. But even within those domains, these techniques are only effective under certain circumstances. For example, batch normalization on deep residual networks appears to make dropout redundant, despite the fact that both have been shown to independently improve generalization.
On a separate note, I think the term regularization is so broad that it makes it difficult to understand anything about it. Considering the fact that convolutions restrict the parameter space exponentially with respect to pixels, you could consider the convolutional neural network a form of regularization on the vanilla neural net.
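A back-of-the-envelope illustration of that last point (the numbers here are illustrative, assuming a single 3x3 kernel and a same-sized output map): the parameter count of a convolution is tiny and independent of image size, compared with a fully connected map between the same pixel grids.

```python
# Parameters needed to map a 32x32 image to a 32x32 feature map
pixels = 32 * 32

dense_params = pixels * pixels  # fully connected: every pixel to every pixel
conv_params  = 3 * 3            # one 3x3 convolution kernel, shared everywhere

print(dense_params, conv_params)  # 1048576 vs 9
```

Weight sharing collapses over a million free parameters into nine, which is exactly the kind of drastic restriction of the hypothesis space that "regularization", broadly construed, refers to.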
|
6,950
|
Why doesn't regularization solve Deep Neural Nets hunger for data?
|
One class of theorems that show why this problem is fundamental are the No Free Lunch Theorems. For every problem with limited samples where a certain regularization helps, there is another problem where that same regularization will make things worse. As Austin points out, we generally find that L1/L2 regularization are helpful for many real-world problems, but this is only an observation and, because of the NFL theorems, there can be no general guarantees.
|
6,951
|
Why doesn't regularization solve Deep Neural Nets hunger for data?
|
I would say that at a high level, the inductive bias of DNNs (deep neural networks) is powerful but slightly too loose, or not opinionated enough. By that I mean that DNNs capture a lot of surface statistics about what is going on, but fail to get at the deeper causal/compositional high-level structure. (You could view convolutions as a poor man's inductive bias specification.)
In addition, it is believed in the machine learning community that the best way to generalize (making good inferences/predictions with little data) is to find the shortest program that gave rise to the data. But program induction/synthesis is hard and we have no good way of doing it efficiently. So instead we rely on a close approximation, which is circuit search, and we know how to do that with backpropagation. Here, Ilya Sutskever gives an overview of that idea.
To illustrate the difference in generalization power of models represented as actual programs vs deep learning models, I'll show the one in this paper: Simulation as an engine of physical scene understanding.
(A) The IPE [intuitive physics engine] model takes inputs (e.g., perception, language, memory, imagery, etc.) that instantiate a distribution over scenes (1), then simulates the effects of physics on the distribution (2), and then aggregates the results for output to other sensorimotor and cognitive faculties (3)
(B) Exp. 1 (Will it fall?) tower stimuli. The tower with the red border is actually delicately balanced, and the other two are the same height, but the blue-bordered one is judged much less likely to fall by the model and people.
(C) Probabilistic IPE model (x axis) vs. human judgment averages (y axis) in Exp. 1. See Fig. S3 for correlations for other values of σ and ϕ. Each point represents one tower (with SEM), and the three colored circles correspond to the three towers in B.
(D) Ground truth (nonprobabilistic) vs. human judgments (Exp. 1). Because it does not represent uncertainty, it cannot capture people’s judgments for a number of our stimuli, such as the red-bordered tower in B. (Note that these cases may be rare in natural scenes, where configurations tend to be more clearly stable or unstable and the IPE would be expected to correlate better with ground truth than it does on our stimuli.)
My point here is that the fit in C is really good, because the model captures the right biases about how humans make physical judgments. This is in big part because it models actual physics (remember that it is an actual physics engine) and can deal with uncertainty.
Now the obvious question is: can you do that with deep learning? This is what Lerer et al. did in this work: Learning Physical Intuition of Block Towers by Example
Their model:
Their model is actually pretty good on the task at hand (predicting the number of falling blocks, and even their falling direction)
But it suffers two major drawbacks:
It needs a huge amount of data to train properly
It generalizes only in shallow ways: you can transfer to more realistic-looking images, or add or remove 1 or 2 blocks. But anything beyond that, and the performance degrades catastrophically: add 3 or 4 blocks, change the prediction task...
There was a comparison study done by Tenenbaum's lab about these two approaches: A Comparative Evaluation of Approximate Probabilistic Simulation and Deep Neural Networks as Accounts of Human Physical Scene Understanding.
Quoting the discussion section:
The performance of CNNs decreases as there are fewer training data. Although AlexNet (not pretrained) performs better with 200,000 training images, it also suffers more from the lack of data, while pretrained AlexNet is able to learn better from a small amount of training images. For our task, both models require around 1,000 images for their performance to be comparable to the IPE model and humans. CNNs also have limited generalization ability across even small scene variations, such as changing the number of blocks. In contrast, IPE models naturally generalize and capture the ways that human judgment accuracy decreases with the number of blocks in a stack.
Taken together, these results point to something fundamental about human cognition that neural networks (or at least CNNs) are not currently capturing: the existence of a mental model of the world’s causal processes. Causal mental models can be simulated to predict what will happen in qualitatively novel situations, and they do not require vast and diverse training data to generalize broadly, but they are inherently subject to certain kinds of errors (e.g., propagation of uncertainty due to state and dynamics noise) just in virtue of operating by simulation.
Back to the point I want to make: while neural networks are powerful models, they seem to lack the ability to represent causal, compositional and complex structure. And they make up for that by requiring lots of training data.
And back to your question: I would venture that the broad inductive bias and the fact that neural networks do not model causality/compositionality is why they need so much training data. Regularization is not a great fix because of the way they generalize. A better fix would be to change their bias, as is currently being tried by Hinton with capsules for modelling whole/part geometry, or interaction networks for modelling relations.
|
6,952
|
Why doesn't regularization solve Deep Neural Nets hunger for data?
|
To clarify my thinking: Say we are using a large Deep NNet to try to model our data, but the data set is small and could actually be modeled by a linear model. Then why don't the network weights converge in such a way that one neuron simulates the linear regression and all the others converge to zeros? Why doesn't regularization help with this?
Neural nets can be trained like this. If proper L1 regularization is used, then many of the weights can be zeroed, and this will make the neural net behave like a concatenation of one or so linear regression neurons and many other zero neurons. So yes, L1/L2 regularization or the like can be used to restrict the size or representational power of the neural network.
Actually, the size of the model itself is a kind of regularization: if you make the model large, you inject the prior knowledge that the problem is highly complex, so it requires a model with high representational power. If you make the model small, you inject the knowledge that the problem is simple, so the model doesn't need much capacity.
And this means L2 regularization will not make networks "sparse" as you described, because L2 regularization injects the prior knowledge that the contribution of each neuron (weight) should be small but non-zero. So the network would use each of the neurons rather than only a small set of them.
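A tiny sketch of that mechanism (illustrative, not specific to any framework): the proximal step for an L1 penalty, soft-thresholding, sets small weights exactly to zero, while the corresponding step for a squared-L2 penalty only shrinks them. This is why L1 can produce the "one linear-regression neuron plus many zero neurons" configuration and L2 cannot:

```python
import numpy as np

def prox_l1(w, t):
    """Proximal step for an L1 penalty (soft-thresholding):
    weights with |w| <= t are set exactly to zero."""
    return np.sign(w) * np.maximum(np.abs(w) - t, 0.0)

def prox_l2(w, t):
    """Proximal step for a squared L2 penalty: uniform shrinkage,
    weights move toward zero but never reach it exactly."""
    return w / (1.0 + t)

w = np.array([0.05, -0.02, 1.5, -0.8, 0.01])
print(prox_l1(w, 0.1))  # small weights zeroed, large ones survive
print(prox_l2(w, 0.1))  # everything shrunk, nothing exactly zero
```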
|
6,953
|
Why doesn't regularization solve Deep Neural Nets hunger for data?
|
First of all there are plenty of regularization methods both in use and in active research for deep learning. So your premise isn't entirely certain.
As for methods in use, weight decay is a direct implementation of an L2 penalty on the weights via gradient descent. Take the gradient of the squared norm of your weights and add a small step in this direction to them at each iteration. Dropout is also considered a form of regularization, which imposes a kind of averaged structure. This would seem to imply something like an L2 penalty over an ensemble of networks with shared parameters.
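A quick numerical check of that equivalence (a sketch: the exact correspondence holds for plain gradient descent, and is known to break down for adaptive optimizers such as Adam):

```python
import numpy as np

rng = np.random.default_rng(1)
w = rng.normal(size=5)
grad_loss = rng.normal(size=5)  # stand-in for the data-loss gradient
lr, lam = 0.1, 0.01

# Explicit L2 penalty: add the gradient of (lam/2)*||w||^2 to the update
w_l2 = w - lr * (grad_loss + lam * w)

# Weight decay: shrink the weights directly at each step
w_wd = (1 - lr * lam) * w - lr * grad_loss

# For plain gradient descent the two updates coincide
print(np.allclose(w_l2, w_wd))  # True
```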
You could presumably crank up the level of these or other techniques to address small samples. But note that regularization implies imposition of prior knowledge. The L2 penalty on the weights implies a Gaussian prior for the weights, for example. Increasing the amount of regularization essentially states that your prior knowledge is increasingly certain and biases your result towards that prior. So you can do it and it will overfit less but the biased output may suck. Obviously the solution is better prior knowledge. For image recognition this would mean much more structured priors regarding the statistics of your problem. The problem with this direction is you are imposing lots of domain expertise, and avoiding having to impose human expertise was one of the reasons you used deep learning.
|
6,954
|
Why doesn't regularization solve Deep Neural Nets hunger for data?
|
Regularization is a method for including prior information in a model. This seems straightforward from the Bayesian perspective, but it is easy to see from outside that perspective as well. For example, the $L_2$ penalty + standardization of covariates in ridge regression essentially uses the prior information that we don't believe estimation should be entirely dominated by a small number of predictors. Similarly, the $L_1$ penalty can be seen as "betting on sparseness of the solution" (side note: this doesn't make sense from the traditional Bayesian perspective, but that's another story...).
A key point here is that regularization isn't always helpful. Rather, regularizing toward what should probably be true is very helpful, but regularizing in the wrong direction is clearly bad.
Now, when it comes to deep neural nets, the lack of interpretability of these models makes regularization a little more difficult. For example, if we're trying to identify cats, we know in advance that "pointy ears" is an important feature. If we were using something like logistic regression with an $L_2$ penalty and we had an indicator variable "pointy ears" in our dataset, we could just reduce the penalty on the pointy-ears variable (or better yet, penalize toward a positive value rather than 0), and then our model would need less data for accurate predictions.
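This per-feature penalty idea can be sketched with generalized ridge regression, where the usual $\lambda I$ is replaced by a diagonal matrix of per-feature penalties. The data and feature names below are made up for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)
n, d = 40, 3
# Orthonormal feature columns keep the example easy to reason about
X, _ = np.linalg.qr(rng.normal(size=(n, d)))
# Hypothetical setup: feature 0 ("pointy ears") truly drives the label
y = 2.0 * X[:, 0] + rng.normal(0, 0.1, n)

def ridge_per_feature(X, y, penalties):
    """Generalized ridge: w = (X'X + diag(penalties))^(-1) X'y,
    so each feature gets its own shrinkage strength."""
    return np.linalg.solve(X.T @ X + np.diag(penalties), X.T @ y)

w_uniform  = ridge_per_feature(X, y, [50.0, 50.0, 50.0])
w_informed = ridge_per_feature(X, y, [0.1, 50.0, 50.0])  # trust feature 0

# Relaxing the penalty on the known-useful feature lets its
# coefficient move much closer to the true value of 2.0
print(w_uniform[0], w_informed[0])
```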
But now suppose our data is images of cats fed into a deep neural network. If "pointy ears" is, in fact, very helpful for identifying cats, maybe we would like to reduce the penalty to give this feature more predictive power. But we have no idea where in the network it will be represented! We can still introduce penalties so that some small part of the system doesn't dominate the whole network, but beyond that, it's hard to introduce regularization in a meaningful way.
In summary, it's extremely difficult to incorporate prior information into a system we don't understand.
|
6,955
|
Intuitive explanation of Kolmogorov Smirnov Test
|
The Kolmogorov-Smirnov test assesses the hypothesis that a random sample (of numerical data) came from a continuous distribution that was completely specified without referring to the data.
Here is the graph of the cumulative distribution function (CDF) of such a distribution.
A sample can be fully described by its empirical (cumulative) distribution function, or ECDF. It plots the fraction of data less than or equal to the horizontal values. Thus, with a random sample of $n$ values, when we scan from left to right it jumps upwards by $1/n$ each time we cross a data value.
The next figure displays the ECDF for a sample of $n=10$ values taken from this distribution. The dot symbols locate the data. The lines are drawn to provide a visual connection among the points similar to the graph of the continuous CDF.
The K-S test compares the CDF to the ECDF using the greatest vertical difference between their graphs. This amount (a positive number) is the Kolmogorov-Smirnov test statistic.
We may visualize the KS test statistic by locating the data point situated furthest above or below the CDF. Here it is highlighted in red. The test statistic is the vertical distance between the extreme point and the value of the reference CDF. Two limiting curves, located this distance above and below the CDF, are drawn for reference. Thus, the ECDF lies between these curves and just touches at least one of them.
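In code, the statistic is just the largest of these vertical gaps, checked on both sides of each ECDF jump. Here is a minimal sketch in Python (independent of the R plotting routine given at the end of this answer):

```python
def ks_statistic(sample, cdf):
    """D = sup_x |ECDF(x) - F(x)|, for a continuous CDF this is
    attained at one of the data points, on one side of its jump."""
    xs = sorted(sample)
    n = len(xs)
    d = 0.0
    for i, v in enumerate(xs):       # i = 0 .. n-1
        F = cdf(v)
        d = max(d,
                (i + 1) / n - F,     # ECDF just after the jump at v
                F - i / n)           # ECDF just before the jump at v
    return d

# Example against the Uniform(0,1) CDF, F(x) = x
print(ks_statistic([0.1, 0.5, 0.9], lambda x: x))  # 0.2333...
```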
To assess the significance of the KS test statistic, we compare it--as usual--to the KS test statistics that would tend to occur in perfectly random samples from the hypothesized distribution. One way to visualize them is to graph the ECDFs for many such (independent) samples in a way that indicates what their KS statistics are. This forms the "null distribution" of the KS statistic.
The ECDF of each of $200$ samples is shown along with a single red marker located where it departs the most from the hypothesized CDF. In this case it is evident that the original sample (in blue) departs less from the CDF than would most random samples. (73% of the random samples depart further from the CDF than does the blue sample. Visually, this means 73% of the red dots fall outside the region delimited by the two red curves.) Thus, we have (on this basis) no evidence to conclude our (blue) sample was not generated by this CDF. That is, the difference is "not statistically significant."
More abstractly, we may plot the distribution of the KS statistics in this large set of random samples. This is called the null distribution of the test statistic. Here it is:
The vertical blue line locates the KS test statistic for the original sample. 27% of the random KS test statistics were smaller and 73% of the random statistics were greater. Scanning across, it looks like the KS statistic for a dataset (of this size, for this hypothesized CDF) would have to exceed 0.4 or so before we would conclude it is extremely large (and therefore constitutes significant evidence that the hypothesized CDF is incorrect).
Although much more can be said--in particular, about why the KS test works the same way, and produces the same null distribution, for any continuous CDF--this is enough to understand the test and to use it together with probability plots to assess data distributions.
In response to requests, here is the essential R code I used for the calculations and plots. It uses the standard Normal distribution (pnorm) for the reference. The commented-out line established that my calculations agree with those of the built-in ks.test function. I had to modify its code in order to extract the specific data point contributing to the KS statistic.
ecdf.ks <- function(x, f=pnorm, col2="#00000010", accent="#d02020", cex=0.6,
limits=FALSE, ...) {
obj <- ecdf(x)
x <- sort(x)
n <- length(x)
y <- f(x) - (0:(n - 1))/n
p <- pmax(y, 1/n - y)
dp <- max(p)
i <- which(p >= dp)[1]
q <- ifelse(f(x[i]) > (i-1)/n, (i-1)/n, i/n)
# if (dp != ks.test(x, f)$statistic) stop("Incorrect.")
plot(obj, col=col2, cex=cex, ...)
points(x[i], q, col=accent, pch=19, cex=cex)
if (limits) {
curve(pmin(1, f(x)+dp), add=TRUE, col=accent)
curve(pmax(0, f(x)-dp), add=TRUE, col=accent)
}
c(i, dp)
}
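To see the statistic without the plotting machinery, here is a minimal check (simulated standard-normal data; the seed and sample size are arbitrary) that computing the statistic directly from the sorted data agrees with the built-in ks.test:

```r
set.seed(17)
x <- sort(rnorm(10))
n <- length(x)
# D = largest gap, above or below, between the reference CDF and the ECDF;
# at x[i] the ECDF jumps from (i-1)/n to i/n
d <- max(pmax(pnorm(x) - (0:(n - 1))/n, (1:n)/n - pnorm(x)))
stopifnot(isTRUE(all.equal(d, unname(ks.test(x, pnorm)$statistic))))
```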
|
6,956
|
Intuitive explanation of Kolmogorov Smirnov Test
|
The one-sample Kolmogorov-Smirnov test finds the largest vertical distance between a completely specified continuous hypothesized cdf and the empirical cdf.
The two-sample Kolmogorov-Smirnov test finds the largest vertical distance between the empirical cdfs for two samples.
Unusually large distances indicate that the sample is not consistent with the hypothesized distribution (or that the two samples are not consistent with having come from the same distribution).
These tests are nonparametric in the sense that the distribution of the test statistic under the null doesn't depend on which specific distribution was specified under the null (or which common distribution the two samples are drawn from).
There are "one-sided" (in a particular sense) versions of these tests, but these are relatively rarely used.
You can do a Kolmogorov-Smirnov test with discrete distributions but the usual version of the test (i.e. using the usual null distribution) is conservative, and sometimes very conservative. You can (however) obtain new critical values for a completely specified discrete distribution.
There is a related test when parameters are estimated in a location-scale family* (or a subset of location and scale), properly called a Lilliefors test (Lilliefors did three tests for the normal case and a test for the exponential case). This is not distribution-free.
* up to a monotonic transformation
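A small sketch of both versions in R (simulated exponential data, chosen arbitrarily); it also confirms that the two-sample statistic is exactly the largest vertical gap between the two ecdfs, evaluated at the pooled data points:

```r
set.seed(11)
x <- rexp(40)
y <- rexp(40)
# one-sample: compare x against the fully specified Exp(1) cdf
d1 <- unname(ks.test(x, pexp)$statistic)
# two-sample: largest vertical gap between the two ecdfs; the gap can
# only change at a data point, so checking the pooled points suffices
d2 <- unname(ks.test(x, y)$statistic)
g <- sort(c(x, y))
stopifnot(d1 > 0, d1 < 1)
stopifnot(isTRUE(all.equal(d2, max(abs(ecdf(x)(g) - ecdf(y)(g))))))
```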
|
6,957
|
Intuitive explanation of Kolmogorov Smirnov Test
|
You are looking for the maximum deviation of empirical CDF (built from observations) from the theoretical values. By definition it can't be larger than 1.
Here's a plot for a uniform distribution CDF (black) and two stylized candidate CDFs (red):
You see that your candidate CDF can't be over the theoretical by more than $D^+$ or below it by more than $D^-$, both of which are bounded in magnitude by 1.
The empirical CDF $S_n$ for the purpose of this test is $S_i=i/N$.
Here we sorted the sample $x_i$ where $i=1,\dots,N$ so that $x_i<x_{i+1}$. You compare it with a theoretical CDF $F_i=F(x_i)$; then you have a set of deviations $D^+_i=\max(0,S_i-F_i)$.
However, that's not what's amazing about the KS statistic. It is that the distribution of $\sup_{x\in(-\infty,\infty)} D^+$ is the same for any continuous distribution of the data! To me that's what you need to get intuitively if you can.
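A quick illustration of that distribution-free property, under the usual continuity assumption: transforming the data by the hypothesized CDF maps the problem onto the uniform distribution without changing the statistic. The sample here is simulated for illustration:

```r
set.seed(3)
x <- rnorm(25)
d_norm <- unname(ks.test(x, pnorm)$statistic)
# probability integral transform: pnorm(x) has the same KS statistic
# against Uniform(0,1) as x has against N(0,1)
d_unif <- unname(ks.test(pnorm(x), punif)$statistic)
stopifnot(isTRUE(all.equal(d_norm, d_unif)))
```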
|
6,958
|
Intuitive explanation of Kolmogorov Smirnov Test
|
I find it helpful to think of the two CDFs, whether population or empirical, as dancing around each other but staying close. Dance partners can spin around each other but will stay within two arm's lengths of each other, right? When two people are further apart than that, they probably aren't dancing with each other.
ONE-SAMPLE
In the one-sample (goodness-of-fit) test, we assume that the data come from some distribution that has a particular CDF. The data also have an empirical CDF. If we are right, then the CDF of the data should dance around the CDF of the assumed distribution but stay close. If the dance partners get too far apart (in vertical distance), then we see that as evidence against our assumption.
TWO-SAMPLE
In the two-sample test, we assume that two data sets come from the same distribution. If that is the case, then the two empirical CDFs should dance around each other but stay fairly close. If the dance partners get too far apart (again, in vertical distance), then we see that as evidence against our assumption.
|
6,959
|
How to do logistic regression in R when outcome is fractional (a ratio of two counts)?
|
The glm function in R allows 3 ways to specify the response for a logistic regression model.
The most common is that each row of the data frame represents a single observation and the response variable is either 0 or 1 (or a factor with 2 levels, or another variable with only 2 unique values).
Another option is to use a 2 column matrix as the response variable with the first column being the counts of 'successes' and the second column being the counts of 'failures'.
You can also specify the response as a proportion between 0 and 1, then specify another column as the 'weight' that gives the total number that the proportion is from (so a response of 0.3 and a weight of 10 is the same as 3 'successes' and 7 'failures').
Either of the last 2 ways would fit what you are trying to do; the last seems the most direct for how you describe your data.
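A sketch of the equivalence of the last two specifications, using made-up data (10 groups of 20 trials each); the two fits should produce identical coefficients:

```r
set.seed(1)
d <- data.frame(x = 1:10, succ = rbinom(10, 20, 0.4))
d$fail <- 20 - d$succ
# response as a two-column matrix: cbind(successes, failures)
f1 <- glm(cbind(succ, fail) ~ x, family = binomial, data = d)
# response as a proportion, with the total count supplied via weights
f2 <- glm(succ/20 ~ x, family = binomial, weights = rep(20, 10), data = d)
stopifnot(isTRUE(all.equal(coef(f1), coef(f2))))
```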
|
6,960
|
How to do logistic regression in R when outcome is fractional (a ratio of two counts)?
|
As a start, if you have a dependent variable that is a proportion, you can use Beta Regression. This doesn't extend (with my limited knowledge) to multiple proportions.
For Beta Regression overview and an R implementation check out betareg.
|
6,961
|
How to do logistic regression in R when outcome is fractional (a ratio of two counts)?
|
I've been using nnet::multinom (the nnet package accompanies MASS) for a similar purpose; it accepts continuous input in [0, 1].
If you need a reference: C. Beleites et al.:
Raman spectroscopic grading of astrocytoma tissues: using soft reference information.
Anal Bioanal Chem, 2011, Vol. 400(9), pp. 2801-2816
|
6,962
|
What is a standard deviation?
|
Standard deviation is a number that represents the "spread" or "dispersion" of a set of data. There are other measures for spread, such as range and variance.
Here are some example sets of data, and their standard deviations:
[1,1,1] standard deviation = 0 (there's no spread)
[-1,1,3] standard deviation = 1.6 (some spread)
[-99,1,101] standard deviation = 82 (big spread)
The above data sets have the same mean.
Deviation means "distance from the mean".
"Standard" here means "standardized", meaning the standard deviation and mean are in the same units, unlike variance.
For example, if the mean height is 2 meters, the standard deviation might be 0.3 meters, whereas the variance would be 0.09 meters squared.
It is convenient to know that at least 75% of the data points always lie within 2 standard deviations of the mean (or around 95% if the distribution is Normal).
For example, if the mean is 100, and the standard deviation is 15, then at least 75% of the values are between 70 and 130.
If the distribution happens to be Normal, then 95% of the values are between 70 and 130.
Generally speaking, IQ test scores are normally distributed and have an average of 100. Someone who is "very bright" is two standard deviations above the mean, meaning an IQ test score of 130.
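The figures above use the population (divide-by-$n$) formula; note that R's built-in sd() divides by $n-1$ instead. A quick check:

```r
# population standard deviation (divide by n); R's sd() divides by n - 1
pop_sd <- function(x) sqrt(mean((x - mean(x))^2))
stopifnot(pop_sd(c(1, 1, 1)) == 0)          # no spread
stopifnot(round(pop_sd(c(-1, 1, 3)), 1) == 1.6)   # some spread
stopifnot(round(pop_sd(c(-99, 1, 101))) == 82)    # big spread
```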
|
6,963
|
What is a standard deviation?
|
A quote from Wikipedia.
It shows how much variation there is from the "average" (mean, or expected/budgeted value). A low standard deviation indicates that the data points tend to be very close to the mean, whereas high standard deviation indicates that the data is spread out over a large range of values.
|
6,964
|
What is a standard deviation?
|
When describing a variable we typically summarise it using two measures: a measure of centre and a measure of spread. Common measures of centre include the mean, median and mode. Common measures of spread include the variance and interquartile range.
The variance (represented by the Greek lowercase sigma raised to the power two) is commonly used when the mean is reported. The variance is the average squared deviation of a variable. A deviation is calculated by subtracting the mean from each observation. The deviations are squared because their sum would otherwise be zero, and squaring removes this problem while maintaining the relative size of the deviations. The problem with using the variance as a measure of spread is that it is in squared units. For example, if our variable of interest was height measured in inches then the variance would be reported in squared inches, which makes little sense. The standard deviation (represented by the Greek lowercase sigma) is the square root of the variance and returns the measure of spread to the original units. This is much more intuitive and is therefore more popular than the variance.
When using the standard deviation, one has to be careful of outliers: they will skew the standard deviation (and the mean), since neither is a resistant measure. A simple example will illustrate this property. The mean of my terrible cricket batting scores of 13, 14, 16, 23, 26, 28, 33, 39, and 61 is 28.11. If we consider 61 to be an outlier and delete it, the mean would be 24.
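A quick check of those figures in R:

```r
scores <- c(13, 14, 16, 23, 26, 28, 33, 39, 61)
stopifnot(round(mean(scores), 2) == 28.11)
# dropping the outlier pulls the mean down and shrinks the spread
stopifnot(mean(scores[scores != 61]) == 24)
stopifnot(sd(scores[scores != 61]) < sd(scores))
```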
|
6,965
|
What is a standard deviation?
|
Here's how I would answer this question using a diagram.
Let's say we weigh 30 cats and calculate the mean weight. Then we produce a scatter plot, with weight on the y axis and cat identity on the x axis. The mean weight can be drawn in as a horizontal line. We can then draw in vertical lines which connect each data point to the mean line - these are the deviations of each data point from the mean, and we call them residuals. Now, these residuals can be useful because they can tell us something about the spread of the data: if there are many big residuals, then cats vary a lot in mass. Conversely, if the residuals are mainly small, then cats are fairly closely clustered around the average weight. So if we could have some metric which tells us the typical length of a residual in this data set, this would be a handy way of denoting how much spread there is in the data. The standard deviation is, effectively, that typical residual length (more precisely, the root-mean-square length of the residuals).
I would follow on from this by giving the calculation for s.d., explaining why we square and then square root (I like Vaibhav's short and sweet explanation). Then I would mention the problems of outliers, as Graham does in his last paragraph.
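A small sketch of that idea with simulated data (the weights here are made up); it shows the one refinement hiding in "average length": sd() is the root-mean-square residual, computed with an $n-1$ divisor:

```r
set.seed(42)
w <- rnorm(30, mean = 4, sd = 0.5)   # 30 simulated cat weights, in kg
res <- w - mean(w)                   # residuals from the mean line
# sd() is the root-mean-square residual with an n - 1 divisor
stopifnot(isTRUE(all.equal(sd(w), sqrt(sum(res^2) / (length(w) - 1)))))
```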
|
6,966
|
What is a standard deviation?
|
I like to think of it as follows: the standard deviation is the average distance from the average. This is more conceptually useful than mathematically useful, but its a nice way to explain it to the uninitiated.
|
6,967
|
What is a standard deviation?
|
A standard deviation is the square root of the second central moment of a distribution. A central moment is the expected value of a power of the difference between a random variable and its expected value. The first central moment is always 0, so we define the second central moment as the expected value of the squared distance of a random variable from its expected value.
To put it on a scale that is more in line with the original observations, we take the square root of that second central moment and call it the standard deviation.
Standard deviation is a property of a population. It measures how much average "dispersion" there is in that population. Are all the observations clustered around the mean, or are they widely spread out?
To estimate the standard deviation of a population, we often calculate the standard deviation of a "sample" from that population. To do this, you take observations from that population, calculate a mean of those observations, and then calculate the square root of the average squared deviation from that "sample mean".
To get an unbiased estimator of the variance, you don't actually calculate the average squared deviation from the sample mean, but instead, you divide by (N-1) where N is the number of observations in your sample. Note that this "sample standard deviation" is not an unbiased estimator of the standard deviation, but the square of the "sample standard deviation" is an unbiased estimator of the variance of the population.
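As a sketch, the divide-by-N versus divide-by-(N-1) distinction looks like this (the sample values are invented):

```python
import math

sample = [12.0, 15.0, 9.0, 11.0, 13.0]
n = len(sample)
mean = sum(sample) / n
ss = sum((x - mean) ** 2 for x in sample)  # sum of squared deviations

var_biased = ss / n          # average squared deviation from the sample mean
var_unbiased = ss / (n - 1)  # unbiased estimator of the population variance
sd_sample = math.sqrt(var_unbiased)  # the usual "sample standard deviation"
```

The unbiased variance estimate is always slightly larger, compensating for the fact that deviations are measured from the sample mean rather than the true population mean.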
|
6,968
|
What is a standard deviation?
|
If the information required is the distribution of data about the mean, standard deviation comes in handy.
The sum of the differences of each value from the mean is zero (obviously, since the values are evenly spread around the mean), hence we square each difference so as to convert negative values to positive, sum them across the population, divide by the number of samples (the size of the population), and then take the square root. This gives the standard deviation.
|
6,969
|
What is a standard deviation?
|
The best way I have understood standard deviation is to think of a hairdresser! (You need to collect data from a hairdresser and average her hair-cutting speed for this example to work.)
It takes an average of 30 minutes for the hairdresser to cut a person's hair.
Suppose you do the calculation (most software packages will do this for you) and you find that the standard deviation is 5 minutes. It means the following:
the hairdresser cuts the hair of 68% of her clients within 25 to 35 minutes
the hairdresser cuts the hair of 95% of her clients within 20 to 40 minutes
How do I know this? You need to look at the normal curve, where 68% falls within 1 standard deviation and 95% falls within 2 standard deviations of the mean (in this case 30 minutes). So you add or subtract the standard deviation from the mean.
If consistency is desired, as in this case, then the smaller the standard deviation, the better. In this case, the hairdresser spends a maximum of about 40 minutes with any given client. You need to cut hair fast in order to run a successful salon!
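The 68%/95% figures come from the normal curve; under the assumption that haircut times are normally distributed, they can be checked with the normal CDF (written here from scratch via the error function):

```python
import math

def normal_cdf(x, mu=0.0, sigma=1.0):
    """Cumulative distribution function of the normal distribution."""
    return 0.5 * (1 + math.erf((x - mu) / (sigma * math.sqrt(2))))

mu, sigma = 30.0, 5.0  # mean and standard deviation of haircut time, in minutes
within_1sd = normal_cdf(mu + sigma, mu, sigma) - normal_cdf(mu - sigma, mu, sigma)
within_2sd = normal_cdf(mu + 2 * sigma, mu, sigma) - normal_cdf(mu - 2 * sigma, mu, sigma)
# within_1sd is about 0.68 (25-35 minutes); within_2sd is about 0.95 (20-40 minutes)
```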
|
6,970
|
Linearity of PCA
|
When we say that PCA is a linear method, we refer to the dimensionality reducing mapping $f:\mathbf x\mapsto \mathbf z$ from high-dimensional space $\mathbb R^p$ to a lower-dimensional space $\mathbb R^k$. In PCA, this mapping is given by multiplication of $\mathbf x$ by the matrix of PCA eigenvectors and so is manifestly linear (matrix multiplication is linear): $$\mathbf z = f(\mathbf x) = \mathbf V^\top \mathbf x.$$ This is in contrast with nonlinear methods of dimensionality reduction, where the dimensionality reducing mapping can be nonlinear.
On the other hand, the $k$ top eigenvectors $\mathbf V\in \mathbb R^{p\times k}$ are computed from the data matrix $\mathbf X\in \mathbb R^{n\times p}$ using what you called $\mathrm{PCA}()$ in your question: $$\mathbf V = \mathrm{PCA}(\mathbf X),$$
and this mapping is certainly non-linear: it involves computing eigenvectors of the covariance matrix, which is a non-linear procedure. (As a trivial example, multiplying $\mathbf X$ by $2$ increases the covariance matrix by $4$, but its eigenvectors stay the same as they are normalized to have unit length.)
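Both points can be verified numerically; here is a sketch with NumPy on made-up data:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))  # toy data matrix, n=200 points in R^3
X -= X.mean(axis=0)

# The "PCA()" step: eigenvectors of the covariance matrix, top k=2 kept.
eigvals, eigvecs = np.linalg.eigh(X.T @ X / (len(X) - 1))
V = eigvecs[:, ::-1][:, :2]

f = lambda x: V.T @ x  # the dimensionality-reducing map z = V^T x

x1, x2 = rng.normal(size=3), rng.normal(size=3)
lhs, rhs = f(x1 + x2), f(x1) + f(x2)  # equal: the map f is linear

# But PCA() itself is not: scaling X by 2 scales the covariance by 4,
# yet the unit-norm eigenvectors are unchanged (up to sign).
_, eigvecs_scaled = np.linalg.eigh((2 * X).T @ (2 * X) / (len(X) - 1))
```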
|
6,971
|
Linearity of PCA
|
"Linear" can mean many things, and is not exclusively employed in a formal manner.
PCA is not often defined as a function in the formal sense, and therefore it is not expected to fulfill the requirements of a linear function when described as such. It is more often described, as you said, as a procedure, and sometimes an algorithm (although I don't like this last option). It is often said to be linear in an informal, not well-defined way.
PCA can be considered linear, for instance, in the following sense. It belongs to a family of methods that consider that each variable $X_i$ can be approximated by a function
$$
X_i \approx f_Y(\alpha)
$$
where $\alpha \in \mathbb{R}^k$ and $Y$ is a set of $k$ variables with some desirable property. In the case of PCA, $Y$ is a set of independent variables that can be reduced in cardinality with minimal loss in approximation accuracy in a specific sense. Those are desirable properties in numerous settings.
Now, for PCA, $f_Y$ is restricted to the form
$$
f_Y(\alpha) = \sum_{j=1}^k \alpha_{j}Y_j
$$
that is, a linear combination of the variables in $Y$.
Given this restriction, it offers a procedure to find the optimal (in some sense) values of $Y$ and the $\alpha_{ij}$'s. That is, PCA only considers linear functions as plausible hypotheses. In this sense, I think it can be legitimately described as "linear".
|
6,972
|
Linearity of PCA
|
PCA provides/is a linear transformation.
If you take the map associated with a particular analysis, say $\mathbf{M} \equiv PCA(X_1 + X_2)$ then $\mathbf{M}(X_1+X_2) = \mathbf{M}(X_1) + \mathbf{M}(X_2)$.
The culprit is that $PCA(X_1 + X_2)$, $PCA(X_1)$ and $PCA(X_2)$ are not the same linear transformations.
As a comparison a very simple example of a process that uses a linear transformation but is not a linear transformation itself:
The rotation $D(\mathbf{v})$ that doubles the angle of a vector $\mathbf{v}$ (say a point in 2-d euclidean space) with some reference vector (say $\left[x,y\right]=\left[1,0\right]$) is not a linear transformation. For example
$D(\left[1,1\right]) \rightarrow \left[0,\sqrt{2}\right]$
and
$D(\left[0,1\right]) \rightarrow \left[-1,0\right]$
but
$D(\left[1,1\right]+\left[0,1\right]=\left[1,2\right]) \rightarrow \left[-1.34,1.79\right] \neq \left[-1,\sqrt{2}\right]$
This doubling of the angle, which involves the calculation of angles, is not linear, and is analogous to amoeba's point that the calculation of the eigenvectors is not linear.
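A quick numerical check of the angle-doubling example (a sketch using the 2-d vectors above):

```python
import math

def double_angle(v):
    """Rotate v to twice its angle with the reference vector [1, 0], keeping its length."""
    r = math.hypot(v[0], v[1])
    theta = math.atan2(v[1], v[0])
    return (r * math.cos(2 * theta), r * math.sin(2 * theta))

a, b = (1.0, 1.0), (0.0, 1.0)
d_sum = double_angle((a[0] + b[0], a[1] + b[1]))  # D(a + b), i.e. D([1, 2])
sum_d = tuple(x + y for x, y in zip(double_angle(a), double_angle(b)))  # D(a) + D(b)
# d_sum is approximately (-1.34, 1.79); sum_d is (-1, sqrt(2)) -- not equal
```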
|
6,973
|
how to represent geography or zip code in machine learning model or recommender system?
|
One of my favorite uses of zip code data is to look up demographic variables based on zipcode that may not be available at the individual level otherwise...
For instance, with http://www.city-data.com/ you can look up income distribution, age ranges, etc., which might tell you something about your data. These continuous variables are often far more useful than just going based on binarized zip codes, at least for relatively finite amounts of data.
Also, zip codes are hierarchical... if you take the first two or three digits, and binarize based on those, you have some amount of regional information, which gets you more data than individual zips.
As Zach said, using latitude and longitude can also be useful, especially in a tree-based model. For a regularized linear model, you can use quadtrees: split the United States into four geographic groups, binarize those, then split each of those areas into four groups and include those as additional binary variables... so for n total leaf regions you end up with [(4n - 1)/3 - 1] total variables (n for the smallest regions, n/4 for the next level up, etc.). Of course this is multicollinear, which is why regularization is needed to do this.
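The hierarchical-prefix idea can be sketched as follows (the zip codes are made up; a quadtree over lat/lon would be analogous but is omitted here):

```python
def prefix_dummies(zip_codes, lengths=(2, 3, 5)):
    """One-hot columns for each zip prefix: region (2 digits), area (3), full zip (5)."""
    vocab = sorted({z[:k] for z in zip_codes for k in lengths})
    index = {p: i for i, p in enumerate(vocab)}
    rows = []
    for z in zip_codes:
        row = [0] * len(vocab)
        for k in lengths:
            row[index[z[:k]]] = 1
        rows.append(row)
    return vocab, rows

vocab, X = prefix_dummies(["02138", "02139", "45809"])
# "02138" and "02139" now share their "02" and "021" columns,
# so a regularized model can pool information across nearby zips.
```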
|
6,974
|
how to represent geography or zip code in machine learning model or recommender system?
|
There are 2 good options that I've seen:
Convert each zipcode to a dummy variable. If you have a lot of data, this can be a quick and easy solution, but you won't be able to make predictions for new zip codes. If you're worried about the number of features, you can add some regularization to your model to drop some of the zipcodes out of the model.
Use the latitude and longitude of the center point of the zipcode as variables. This works really well in tree-based models, as they can cut up the latitude/longitude grid into regions that are relevant to your target variable. This will also allow you to make predictions for new zipcodes, and doesn't require as much data to get right. However, this won't work well for linear models.
Personally, I really like tree-based models (such as random forest or GBMs), so I almost always choose option 2. If you want to get really fancy, you can use the lat/lon of the center of population for the zipcode, rather than the zipcode centroid. But that can be hard to get ahold of.
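A minimal sketch of option 2, assuming a hypothetical centroid lookup table (in practice this would come from a gazetteer such as the Census ZCTA centroid files):

```python
# Hypothetical zip -> (lat, lon) centroid table; values are illustrative.
CENTROIDS = {"02138": (42.380, -71.125), "45809": (40.883, -84.180)}

def add_latlon(records, default=(float("nan"), float("nan"))):
    """Replace the raw zip string with two numeric features a tree model can split on."""
    out = []
    for rec in records:
        lat, lon = CENTROIDS.get(rec["zip"], default)
        out.append({**rec, "lat": lat, "lon": lon})
    return out

rows = add_latlon([{"zip": "02138"}, {"zip": "45809"}])
```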
|
6,975
|
how to represent geography or zip code in machine learning model or recommender system?
|
If you are calculating distance between records, as in clustering or K-NN, distances between zipcodes in their raw form might be informative. 02138 is much closer to 02139, geographically, than it is to 45809.
|
6,976
|
how to represent geography or zip code in machine learning model or recommender system?
|
I would make a choropleth map of your model's residuals at the zip code level.
The result is called a spatial residual map, and it may help you choose a new explanatory variable to include in your model. This approach is called exploratory spatial data analysis (ESDA).
One potential workflow:
for each zip code get the average residual
make a choropleth map to see the geographic distribution of the residuals
look for patterns that might be explained by a new explanatory variable. For example, if you see all suburban or southern or beach zipcodes with high residuals then you can add a regional dummy variable defined by the relevant zipcode grouping, or if you see high residuals for high income zipcodes then you can add an income variable.
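The first step of the workflow above is a simple group-by; a sketch with invented residuals:

```python
from collections import defaultdict

# Hypothetical (zip code, residual) pairs from a fitted model.
pairs = [("02138", 0.4), ("02138", -0.1), ("45809", 1.2), ("45809", 0.8)]

sums, counts = defaultdict(float), defaultdict(int)
for z, r in pairs:
    sums[z] += r
    counts[z] += 1

avg_residual = {z: sums[z] / counts[z] for z in sums}
# avg_residual is then joined to zip-code polygons to draw the choropleth.
```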
|
6,977
|
how to represent geography or zip code in machine learning model or recommender system?
|
I dealt with something similar when training a classifier that used native language as a feature (how do you measure similarity between English and Spanish?) There are lots of methods out there for determining similarity among non-categorical data.
It depends on your data, but if you find that geographic distance from a zip code is not as important as whether a given input contains particular zip codes, then non-categorical methods might help.
|
6,978
|
how to represent geography or zip code in machine learning model or recommender system?
|
You could transform your zip code into a nominal variable (string/factor). However, as far as I remember, zip code might contain other information like county, region, etc. What I would do is to understand how zip code encodes information and decode that into multiple features.
In any case, leaving zip code as a numeric variable is not a good idea, since some models might consider the numeric ordering or distances as something to learn.
|
6,979
|
how to represent geography or zip code in machine learning model or recommender system?
|
Any demographic data contain lots of categories, and there are different methods to convert categorical data into numeric. However, a standard encoding method like one-hot encoding might cause a high-cardinality issue or run into memory problems.
In my opinion the best way to handle zip codes is to use "HashingEncoder" or "TargetEncoder".
Both of these encoders are already available in the category_encoders package in Python.
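The idea behind the hashing approach can be sketched without the library (this hand-rolled stand-in is not the category_encoders implementation; a stable CRC32 hash is used so buckets are reproducible across runs):

```python
import zlib

def hash_encode(zip_codes, n_buckets=8):
    """Map each zip code into one of n_buckets one-hot columns (the hashing trick)."""
    rows = []
    for z in zip_codes:
        row = [0] * n_buckets
        row[zlib.crc32(z.encode()) % n_buckets] = 1
        rows.append(row)
    return rows

X = hash_encode(["02138", "02139", "45809", "02138"])
# Collisions are possible, but the feature count stays fixed at n_buckets
# no matter how many distinct zip codes appear.
```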
|
6,980
|
how to represent geography or zip code in machine learning model or recommender system?
|
You can featurize the Zipcodes using the above techniques, but let me suggest an alternative.
Suppose we have binary class labels, and the data contain "n" zip codes. Now, for each zip code, we take the probability of occurrence of a given class label (either 1 or 0).
So, for a zipcode "j" we get a probability P_j as:
(number of occurrences of "j" with class label 1 or 0) / (total number of occurrences of "j").
This way we can convert it into a very nice probabilistic interpretation.
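This per-zip class frequency (essentially target encoding) can be sketched on toy data:

```python
from collections import defaultdict

# Toy (zip code, binary label) pairs.
data = [("02138", 1), ("02138", 1), ("02138", 0), ("45809", 0), ("45809", 1)]

pos, tot = defaultdict(int), defaultdict(int)
for z, y in data:
    pos[z] += y
    tot[z] += 1

# P_j = occurrences of zip j with label 1 / total occurrences of zip j
p = {z: pos[z] / tot[z] for z in tot}
```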
|
6,981
|
Is it possible to prove a null hypothesis?
|
If you are talking about the real world & not formal logic, the answer is of course. "Proof" of anything by empirical means depends on the strength of the inference one can make, which in turn is determined by validity of the testing process as evaluated in light of everything one knows about how the world works (i.e., theory). Whenever one accepts that certain empirical results justify rejecting the "null" hypothesis, one is necessarily making judgments of this sort (validity of design; world works in certain way), so having to make the analogous assumptions necessary to justify inferring "proof of the null" is not problematic at all.
So what are the analogous assumptions? Here is an example of "proving the null" that is commonplace in health science & in social science. (1) Define "null" or "no effect" in some way that is practically meaningful. Let's say that I believe that I should conduct myself as if there is no meaningful difference between 2 treatments, t1 & t2, for a disease unless one gives a 3% better chance of recovery than the other. (2) Figure out a valid design for testing whether there is any effect -- in this case, whether there is a difference in recovery likelihood between t1 & t2. (3) Do a power analysis to determine what sample size is necessary to generate a sufficiently high likelihood -- one that I am confident relying on given what's at stake -- that I would see the meaningful effect, 3% in my example, assuming it exists. Usually people say power is sufficient if the likelihood of observing a specified effect at a specified alpha is at least 0.80, but the right level of confidence is really a matter of how averse you are to error -- same as it is when you select a p-value threshold for "rejecting the null." (4) Perform the empirical test & observe the effect. If it is below the specified "meaningful difference" value -- 3% in my example -- you've "proven" that there is "no effect."
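Step (3) can be sketched with the usual normal-approximation sample-size formula for two proportions. This is a rough illustration, assuming a baseline recovery rate of 50% and the 3% meaningful difference above:

```python
import math
from statistics import NormalDist

def n_per_group(p1, p2, alpha=0.05, power=0.80):
    """Normal-approximation sample size per arm to detect p1 vs p2."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)   # two-sided critical value
    z_b = NormalDist().inv_cdf(power)           # power quantile
    var = p1 * (1 - p1) + p2 * (1 - p2)
    return math.ceil((z_a + z_b) ** 2 * var / (p1 - p2) ** 2)

n = n_per_group(0.50, 0.53)   # detect the 3% "meaningful difference"
```

With these assumed rates, over four thousand patients per arm are needed, which is one reason "proving the null" to a tight margin is expensive.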
For a good treatment of this matter, see Streiner, D.L. Unicorns Do Exist: A Tutorial on “Proving” the Null Hypothesis. Canadian Journal of Psychiatry 48, 756-761 (2003).
|
Is it possible to prove a null hypothesis?
|
If you are talking about the real world & not formal logic, the answer is of course. "Proof" of anything by empirical means depends on the strength of the inference one can make, which in turn is dete
|
Is it possible to prove a null hypothesis?
If you are talking about the real world & not formal logic, the answer is of course. "Proof" of anything by empirical means depends on the strength of the inference one can make, which in turn is determined by validity of the testing process as evaluated in light of everything one knows about how the world works (i.e., theory). Whenever one accepts that certain empirical results justify rejecting the "null" hypothesis, one is necessarily making judgments of this sort (validity of design; world works in certain way), so having to make the analogous assumptions necessary to justify inferring "proof of the null" is not problematic at all.
So what are the analogous assumptions? Here is an example of "proving the null" that is commonplace in health science & in social science. (1) Define "null" or "no effect" in some way that is practically meaningful. Let's say that I believe that I should conduct myself as if there is no meaningful difference between 2 treatments, t1 & t2, for a disease unless one gives a 3% better chance of recovery than the other. (2) Figure out a valid design for testing whether there is any effect -- in this case, whether there is a difference in recovery likelihood between t1 & t2. (3) Do a power analysis to determine what sample size is necessary to generate a sufficiently high likelihood -- one that I am confident relying on given what's at stake -- that I would see the meaningful effect, 3% in my example, assuming it exists. Usually people say power is sufficient if the likelihood of observing a specified effect at a specified alpha is at least 0.80, but the right level of confidence is really a matter of how averse you are to error -- same as it is when you select a p-value threshold for "rejecting the null." (4) Perform the empirical test & observe the effect. If it is below the specified "meaningful difference" value -- 3% in my example -- you've "proven" that there is "no effect."
For a good treatment of this matter, see Streiner, D.L. Unicorns Do Exist: A Tutorial on “Proving” the Null Hypothesis. Canadian Journal of Psychiatry 48, 756-761 (2003).
|
Is it possible to prove a null hypothesis?
If you are talking about the real world & not formal logic, the answer is of course. "Proof" of anything by empirical means depends on the strength of the inference one can make, which in turn is dete
|
6,982
|
Is it possible to prove a null hypothesis?
|
Answer from the mathematical side: it is possible if and only if "hypotheses are mutually singular".
If by "prove" you mean have a rule that can "accept" (should I say that:) ) $H_0$ with a probability to make a mistake that is zero, then you are searching what could be called "ideal test" and this exists:
If you are testing whether a random variable $X$ is drawn from $P_0$ or from $P_1$ (i.e. testing $H_0: X\leadsto P_0$ versus $H_1: X\leadsto P_1$) then there exists an ideal test if and only if $P_1\bot P_0$ ($P_1$ and $P_0$ are "mutually singular").
If you don't know what "mutually singular" means I can give you an example: $\mathcal{U}[0,1]$ and $\mathcal{U}[3,4]$ (uniforms on $[0,1]$ and $[3,4]$) are mutually singular. This means if you want to test
$H_0: X\leadsto \mathcal{U}[0,1]$ versus $H_1: X\leadsto \mathcal{U}[3,4]$
then there exists an ideal test (guess what it is :) ): a test that is never wrong!
If $P_1$ and $P_0$ are not mutually singular, then this does not exist (this results from the "only if part")!
In non-mathematical terms this means that you can prove the null if and only if the proof is already in your assumptions (i.e. if and only if you have chosen hypotheses $H_0$ and $H_1$ that are so different that a single observation from $H_0$ cannot be identified as one from $H_1$ and vice versa).
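The ideal test for the uniform example above is just a threshold on the observed value; a small simulation (purely illustrative) shows it never errs:

```python
import random

def ideal_test(x):
    """Never-wrong test of H0: X ~ U[0,1] versus H1: X ~ U[3,4]."""
    return "H0" if x <= 1 else "H1"

random.seed(0)
errors = sum(ideal_test(random.uniform(0, 1)) != "H0" for _ in range(1000))
errors += sum(ideal_test(random.uniform(3, 4)) != "H1" for _ in range(1000))
# the supports are disjoint (mutually singular), so errors is always 0
```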
|
Is it possible to prove a null hypothesis?
|
Answer from the mathematical side : it is possible if and only if "hypotheses are mutually singular".
If by "prove" you mean have a rule that can "accept" (should I say that:) ) $H_0$ with a probabil
|
Is it possible to prove a null hypothesis?
Answer from the mathematical side: it is possible if and only if "hypotheses are mutually singular".
If by "prove" you mean have a rule that can "accept" (should I say that:) ) $H_0$ with a probability to make a mistake that is zero, then you are searching what could be called "ideal test" and this exists:
If you are testing whether a random variable $X$ is drawn from $P_0$ or from $P_1$ (i.e. testing $H_0: X\leadsto P_0$ versus $H_1: X\leadsto P_1$) then there exists an ideal test if and only if $P_1\bot P_0$ ($P_1$ and $P_0$ are "mutually singular").
If you don't know what "mutually singular" means I can give you an example: $\mathcal{U}[0,1]$ and $\mathcal{U}[3,4]$ (uniforms on $[0,1]$ and $[3,4]$) are mutually singular. This means if you want to test
$H_0: X\leadsto \mathcal{U}[0,1]$ versus $H_1: X\leadsto \mathcal{U}[3,4]$
then there exists an ideal test (guess what it is :) ): a test that is never wrong!
If $P_1$ and $P_0$ are not mutually singular, then this does not exist (this results from the "only if part")!
In non-mathematical terms this means that you can prove the null if and only if the proof is already in your assumptions (i.e. if and only if you have chosen hypotheses $H_0$ and $H_1$ that are so different that a single observation from $H_0$ cannot be identified as one from $H_1$ and vice versa).
|
Is it possible to prove a null hypothesis?
Answer from the mathematical side : it is possible if and only if "hypotheses are mutually singular".
If by "prove" you mean have a rule that can "accept" (should I say that:) ) $H_0$ with a probabil
|
6,983
|
Is it possible to prove a null hypothesis?
|
Yes there is a definitive answer. That answer is: No, there isn't a way to prove a null hypothesis. The best you can do, as far as I know, is throw confidence intervals around your estimate and demonstrate that the effect is so small that it might as well be essentially non-existent.
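One common way to formalize "the CI shows the effect is essentially non-existent" is an equivalence (TOST-style) check. A rough sketch, assuming a normally distributed estimate with known standard error (names are illustrative):

```python
from statistics import NormalDist

def practically_null(estimate, se, eps, alpha=0.05):
    """TOST-style check: the 90% CI lies entirely inside (-eps, eps)."""
    z = NormalDist().inv_cdf(1 - alpha)   # one-sided critical value, applied on each side
    return -eps < estimate - z * se and estimate + z * se < eps

ok = practically_null(0.01, 0.02, eps=0.05)      # precise estimate near zero: True
noisy = practically_null(0.01, 0.20, eps=0.05)   # too uncertain to conclude: False
```

Here eps is the margin below which deviations are treated as practically non-existent; choosing it is a substantive, not statistical, decision.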
|
Is it possible to prove a null hypothesis?
|
Yes there is a definitive answer. That answer is: No, there isn't a way to prove a null hypothesis. The best you can do, as far as I know, is throw confidence intervals around your estimate and demo
|
Is it possible to prove a null hypothesis?
Yes there is a definitive answer. That answer is: No, there isn't a way to prove a null hypothesis. The best you can do, as far as I know, is throw confidence intervals around your estimate and demonstrate that the effect is so small that it might as well be essentially non-existent.
|
Is it possible to prove a null hypothesis?
Yes there is a definitive answer. That answer is: No, there isn't a way to prove a null hypothesis. The best you can do, as far as I know, is throw confidence intervals around your estimate and demo
|
6,984
|
Is it possible to prove a null hypothesis?
|
For me, the decision theoretical framework presents the easiest way to understand the "null hypothesis". It basically says that there must be at least two alternatives: the Null hypothesis, and at least one alternative. Then the "decision problem" is to accept one of the alternatives, and reject the others (although we need to be precise about what we mean by "accepting" and "rejecting" the hypothesis). I see the question of "can we prove the null hypothesis?" as analogous to "can we always make the correct decision?". From a decision theory perspective the answer is clearly yes if
1) there is no uncertainty in the decision making process, for then it is a mathematical exercise to work out what the correct decision is.
2) we accept all the other premises/assumptions of the problem. The most critical one (I think) is that the hypotheses we are deciding between are exhaustive, and one (and only one) of them must be true, and the others must be false.
From a more philosophical standpoint, it is not possible to "prove" anything, in the sense that the "proof" depends entirely on the assumptions / axioms which lead to that "proof". I see proof as a kind of logical equivalence rather than a "fact" or "truth" in the sense that if the proof is wrong, the assumptions which led to it are also wrong.
Applying this to the "proving the null hypothesis" I can "prove" it to be true by simply assuming that it is true, or by assuming that it is true if certain conditions are meet (such as the value of a statistic).
|
Is it possible to prove a null hypothesis?
|
For me, the decision theoretical framework presents the easiest way to understand the "null hypothesis". It basically says that there must be at least two alternatives: the Null hypothesis, and at le
|
Is it possible to prove a null hypothesis?
For me, the decision theoretical framework presents the easiest way to understand the "null hypothesis". It basically says that there must be at least two alternatives: the Null hypothesis, and at least one alternative. Then the "decision problem" is to accept one of the alternatives, and reject the others (although we need to be precise about what we mean by "accepting" and "rejecting" the hypothesis). I see the question of "can we prove the null hypothesis?" as analogous to "can we always make the correct decision?". From a decision theory perspective the answer is clearly yes if
1) there is no uncertainty in the decision making process, for then it is a mathematical exercise to work out what the correct decision is.
2) we accept all the other premises/assumptions of the problem. The most critical one (I think) is that the hypotheses we are deciding between are exhaustive, and one (and only one) of them must be true, and the others must be false.
From a more philosophical standpoint, it is not possible to "prove" anything, in the sense that the "proof" depends entirely on the assumptions / axioms which lead to that "proof". I see proof as a kind of logical equivalence rather than a "fact" or "truth" in the sense that if the proof is wrong, the assumptions which led to it are also wrong.
Applying this to the "proving the null hypothesis" I can "prove" it to be true by simply assuming that it is true, or by assuming that it is true if certain conditions are meet (such as the value of a statistic).
|
Is it possible to prove a null hypothesis?
For me, the decision theoretical framework presents the easiest way to understand the "null hypothesis". It basically says that there must be at least two alternatives: the Null hypothesis, and at le
|
6,985
|
Is it possible to prove a null hypothesis?
|
Yes, it is possible to prove the null--in exactly the same sense that it is possible to prove any alternative to the null. In a Bayesian analysis, it is perfectly possible for the odds in favor of the null versus any of the proposed alternatives to it to become arbitrarily large. Moreover, it is false to assert, as some of the above answers assert, that one can only prove the null if the alternatives to it are disjoint (do not overlap with the null). In a Bayesian analysis every hypothesis has a prior probability distribution. This distribution spreads a unit mass of prior probability out over the proposed alternatives. The null hypothesis puts all of the prior probability on a single alternative. In principle, alternatives to the null may put all of the prior probability on some non-null alternative (on another "point"), but this is rare. In general, alternatives hedge, that is, they spread the same mass of prior probability out over other alternatives--either to the exclusion of the null alternative, or, more commonly, including the null alternative. The question then becomes which hypothesis puts the most prior probability where the experimental data actually fall. If the data fall tightly around where the null says they should fall, then it will be the odds-on favorite (among the proposed hypotheses) EVEN THOUGH IT IS INCLUDED IN (NESTED IN, NOT MUTUALLY EXCLUSIVE WITH) THE ALTERNATIVES TO IT. The belief that it is not possible for a nested alternative to be more likely than the set in which it is nested reflects the failure to distinguish between probability and likelihood. While it is impossible for a component of a set to be less probable than the entire set, it is perfectly possible for the posterior likelihood of a component of a set of hypotheses to be greater than the posterior likelihood of the set as a whole.
The posterior likelihood of an hypothesis is the product of the likelihood function and the prior probability distribution that the hypothesis posits. If an hypothesis puts all of the prior probability in the right place (e.g., on the null), then it will have a higher posterior likelihood than an hypothesis that puts some of the prior probability in the wrong place (not on the null).
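As a concrete toy illustration (my own example, not from the answer above): for binomial data, a point null p = 0.5 against a uniform prior on p gives a closed-form Bayes factor, and data falling right on the null make the odds favor the null:

```python
from math import comb

def bf01_binomial(k, n):
    """Bayes factor for H0: p = 0.5 (point mass) vs H1: p ~ Uniform(0, 1)."""
    m0 = comb(n, k) * 0.5 ** n   # marginal likelihood under the point null
    m1 = 1 / (n + 1)             # uniform prior integrates the binomial to 1/(n+1)
    return m0 / m1

bf = bf01_binomial(50, 100)   # 50/100 successes, exactly on the null: odds ~8:1 for H0
```

Note H0 is nested inside H1 here, yet the posterior odds still favor it, exactly as the answer argues.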
|
Is it possible to prove a null hypothesis?
|
Yes, it is possible to prove the null--in exactly the same sense that it is possible to prove any alternative to the null. In a Bayesian analysis, it is perfectly possible for the odds in favor of the
|
Is it possible to prove a null hypothesis?
Yes, it is possible to prove the null--in exactly the same sense that it is possible to prove any alternative to the null. In a Bayesian analysis, it is perfectly possible for the odds in favor of the null versus any of the proposed alternatives to it to become arbitrarily large. Moreover, it is false to assert, as some of the above answers assert, that one can only prove the null if the alternatives to it are disjoint (do not overlap with the null). In a Bayesian analysis every hypothesis has a prior probability distribution. This distribution spreads a unit mass of prior probability out over the proposed alternatives. The null hypothesis puts all of the prior probability on a single alternative. In principle, alternatives to the null may put all of the prior probability on some non-null alternative (on another "point"), but this is rare. In general, alternatives hedge, that is, they spread the same mass of prior probability out over other alternatives--either to the exclusion of the null alternative, or, more commonly, including the null alternative. The question then becomes which hypothesis puts the most prior probability where the experimental data actually fall. If the data fall tightly around where the null says they should fall, then it will be the odds-on favorite (among the proposed hypotheses) EVEN THOUGH IT IS INCLUDED IN (NESTED IN, NOT MUTUALLY EXCLUSIVE WITH) THE ALTERNATIVES TO IT. The belief that it is not possible for a nested alternative to be more likely than the set in which it is nested reflects the failure to distinguish between probability and likelihood. While it is impossible for a component of a set to be less probable than the entire set, it is perfectly possible for the posterior likelihood of a component of a set of hypotheses to be greater than the posterior likelihood of the set as a whole.
The posterior likelihood of an hypothesis is the product of the likelihood function and the prior probability distribution that the hypothesis posits. If an hypothesis puts all of the prior probability in the right place (e.g., on the null), then it will have a higher posterior likelihood than an hypothesis that puts some of the prior probability in the wrong place (not on the null).
|
Is it possible to prove a null hypothesis?
Yes, it is possible to prove the null--in exactly the same sense that it is possible to prove any alternative to the null. In a Bayesian analysis, it is perfectly possible for the odds in favor of the
|
6,986
|
Is it possible to prove a null hypothesis?
|
Technically, no, a null hypothesis cannot be proven. For any fixed, finite sample size, there will always be some small but nonzero effect size for which your statistical test has virtually no power. More practically, though, you can prove that you're within some small epsilon of the null hypothesis, such that deviations less than this epsilon are not practically significant.
|
Is it possible to prove a null hypothesis?
|
Technically, no, a null hypothesis cannot be proven. For any fixed, finite sample size, there will always be some small but nonzero effect size for which your statistical test has virtually no power.
|
Is it possible to prove a null hypothesis?
Technically, no, a null hypothesis cannot be proven. For any fixed, finite sample size, there will always be some small but nonzero effect size for which your statistical test has virtually no power. More practically, though, you can prove that you're within some small epsilon of the null hypothesis, such that deviations less than this epsilon are not practically significant.
|
Is it possible to prove a null hypothesis?
Technically, no, a null hypothesis cannot be proven. For any fixed, finite sample size, there will always be some small but nonzero effect size for which your statistical test has virtually no power.
|
6,987
|
Is it possible to prove a null hypothesis?
|
There is a case where a proof is possible. Suppose you have a school and your null hypothesis is that the numbers of boys and of girls are equal. As the sample size increases, the uncertainty in the ratio of boys to girls tends to reduce, eventually reaching certainty (which is what I assume you mean by proof) when the whole pupil population is sampled.
But if you do not have a finite population, or if you are sampling with replacement and cannot spot resampled individuals, then you cannot reduce the uncertainty to zero with a finite sample.
|
Is it possible to prove a null hypothesis?
|
There is a case where a proof is possible. Suppose you have a school and your null hypothesis is that the numbers of boys and of girls are equal. As the sample size increases, the uncertainty in the
|
Is it possible to prove a null hypothesis?
There is a case where a proof is possible. Suppose you have a school and your null hypothesis is that the numbers of boys and of girls are equal. As the sample size increases, the uncertainty in the ratio of boys to girls tends to reduce, eventually reaching certainty (which is what I assume you mean by proof) when the whole pupil population is sampled.
But if you do not have a finite population, or if you are sampling with replacement and cannot spot resampled individuals, then you cannot reduce the uncertainty to zero with a finite sample.
|
Is it possible to prove a null hypothesis?
There is a case where a proof is possible. Suppose you have a school and your null hypothesis is that the numbers of boys and of girls are equal. As the sample size increases, the uncertainty in the
|
6,988
|
Is it possible to prove a null hypothesis?
|
I would like to discuss here a point about which a lot of users are somewhat confused. What is the real meaning of the Null Hypothesis statement H0: p=0? Are we trying to determine if the parameter p is zero? Of course not, there is no way to achieve such a goal.
What we intend to establish is that, given the data set, the evaluated parameter value is (or not) indiscernible from zero. Remember that NHST is "unfair" towards the alternative hypotheses: the null is ascribed a 95% Confidence Level, and only 5% to the alternative. In consequence
a “non-significant" result does not mean that H0 holds but simply that
we did not find sufficient evidence that the alternative is likely.
|
Is it possible to prove a null hypothesis?
|
I would like to discuss here a point about which a lot of users are somewhat confused. What is the real meaning of the Null Hypothesis statement H0: p=0? Are we trying to determine if the parameter p is zero? Of
|
Is it possible to prove a null hypothesis?
I would like to discuss here a point about which a lot of users are somewhat confused. What is the real meaning of the Null Hypothesis statement H0: p=0? Are we trying to determine if the parameter p is zero? Of course not, there is no way to achieve such a goal.
What we intend to establish is that, given the data set, the evaluated parameter value is (or not) indiscernible from zero. Remember that NHST is "unfair" towards the alternative hypotheses: the null is ascribed a 95% Confidence Level, and only 5% to the alternative. In consequence
a “non-significant" result does not mean that H0 holds but simply that
we did not find sufficient evidence that the alternative is likely.
|
Is it possible to prove a null hypothesis?
I would like to discuss here a point about which a lot of users are somewhat confused. What is the real meaning of the Null Hypothesis statement H0: p=0? Are we trying to determine if the parameter p is zero? Of
|
6,989
|
How to resolve Simpson's paradox?
|
In your question, you state that you don't know what "causal Bayesian networks" and "back door tests" are.
Suppose you have a causal Bayesian network. That is, a directed acyclic graph whose nodes represent propositions and whose directed edges represent potential causal relationships. You may have many such networks for each of your hypotheses. There are three ways to make a compelling argument about the strength or existence of an edge $A \stackrel?\rightarrow B$.
The easiest way is an intervention. This is what the other answers are suggesting when they say that "proper randomization" will fix the problem. You randomly force $A$ to have different values and you measure $B$. If you can do that, you're done, but you can't always do that. In your example, it may be unethical to give people ineffective treatments to deadly diseases, or they may have some say in their treatment, e.g., they may choose the less harsh (treatment B) when their kidney stones are small and less painful.
The second way is the front door method. You want to show that $A$ acts on $B$ via $C$, i.e., $A\rightarrow C \rightarrow B$. If you assume that $C$ is potentially caused by $A$ but has no other causes, and you can measure that $C$ is correlated with $A$, and $B$ is correlated with $C$, then you can conclude evidence must be flowing via $C$. The original example: $A$ is smoking, $B$ is cancer, $C$ is tar accumulation. Tar can only come from smoking, and it correlates with both smoking and cancer. Therefore, smoking causes cancer via tar (though there could be other causal paths that mitigate this effect).
The third way is the back door method. You want to show that $A$ and $B$ aren't correlated because of a "back door", e.g. common cause, i.e., $A \leftarrow D \rightarrow B$. Since you have assumed a causal model, you merely need to block all of the paths (by observing variables and conditioning on them) through which evidence can flow up from $A$ and down to $B$. It's a bit tricky to block these paths, but Pearl gives a clear algorithm that lets you know which variables you have to observe to block these paths.
gung is right that with good randomization, confounders won't matter. Since we're assuming that intervening at the hypothetical cause (treatment) is not allowed, any common cause between the hypothetical cause (treatment) and effect (survival), such as age or kidney stone size, will be a confounder. The solution is to take the right measurements to block all of the back doors. For further reading see:
Pearl, Judea. "Causal diagrams for empirical research." Biometrika 82.4 (1995): 669-688.
To apply this to your problem, let us first draw the causal graph. (Treatment-preceding) kidney stone size $X$ and treatment type $Y$ are both causes of success $Z$. $X$ may be a cause of $Y$ if other doctors are assigning treatment based on kidney stone size. Clearly there are no other causal relationships between $X$, $Y$, and $Z$. $Y$ comes after $X$ so it cannot be its cause. Similarly $Z$ comes after $X$ and $Y$.
Since $X$ is a common cause, it should be measured. It is up to the experimenter to determine the universe of variables and potential causal relationships. For every experiment, the experimenter measures the necessary "back door variables" and then calculates the marginal probability distribution of treatment success for each configuration of variables. For a new patient, you measure the variables and follow the treatment indicated by the marginal distribution. If you can't measure everything or you don't have a lot of data but know something about the architecture of the relationships, you can do "belief propagation" (Bayesian inference) on the network.
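For the kidney-stone example, the back-door adjustment amounts to averaging the stratum-specific success rates over the marginal distribution of stone size. A sketch using the classic numbers from Charig et al. (1986), on which this example is usually based:

```python
def adjusted_success(rates, p_x):
    """Back-door formula: P(success | do(treatment)) = sum over x of P(success | x, t) * P(x)."""
    return sum(p * r for p, r in zip(p_x, rates))

# Treatment A: small stones 81/87, large 192/263; treatment B: 234/270 and 55/80.
p_small = (87 + 270) / 700        # P(stone = small) across all 700 patients
p_x = (p_small, 1 - p_small)

adj_a = adjusted_success((81 / 87, 192 / 263), p_x)
adj_b = adjusted_success((234 / 270, 55 / 80), p_x)
# Aggregated, B looks better (289/350 vs 273/350); adjusted, A wins: adj_a > adj_b
```

The reversal disappears precisely because conditioning on stone size blocks the back-door path from treatment to outcome.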
|
How to resolve Simpson's paradox?
|
In your question, you state that you don't know what "causal Bayesian networks" and "back door tests" are.
Suppose you have a causal Bayesian network. That is, a directed acyclic graph whose nodes re
|
How to resolve Simpson's paradox?
In your question, you state that you don't know what "causal Bayesian networks" and "back door tests" are.
Suppose you have a causal Bayesian network. That is, a directed acyclic graph whose nodes represent propositions and whose directed edges represent potential causal relationships. You may have many such networks for each of your hypotheses. There are three ways to make a compelling argument about the strength or existence of an edge $A \stackrel?\rightarrow B$.
The easiest way is an intervention. This is what the other answers are suggesting when they say that "proper randomization" will fix the problem. You randomly force $A$ to have different values and you measure $B$. If you can do that, you're done, but you can't always do that. In your example, it may be unethical to give people ineffective treatments to deadly diseases, or they may have some say in their treatment, e.g., they may choose the less harsh (treatment B) when their kidney stones are small and less painful.
The second way is the front door method. You want to show that $A$ acts on $B$ via $C$, i.e., $A\rightarrow C \rightarrow B$. If you assume that $C$ is potentially caused by $A$ but has no other causes, and you can measure that $C$ is correlated with $A$, and $B$ is correlated with $C$, then you can conclude evidence must be flowing via $C$. The original example: $A$ is smoking, $B$ is cancer, $C$ is tar accumulation. Tar can only come from smoking, and it correlates with both smoking and cancer. Therefore, smoking causes cancer via tar (though there could be other causal paths that mitigate this effect).
The third way is the back door method. You want to show that $A$ and $B$ aren't correlated because of a "back door", e.g. common cause, i.e., $A \leftarrow D \rightarrow B$. Since you have assumed a causal model, you merely need to block all of the paths (by observing variables and conditioning on them) through which evidence can flow up from $A$ and down to $B$. It's a bit tricky to block these paths, but Pearl gives a clear algorithm that lets you know which variables you have to observe to block these paths.
gung is right that with good randomization, confounders won't matter. Since we're assuming that intervening at the hypothetical cause (treatment) is not allowed, any common cause between the hypothetical cause (treatment) and effect (survival), such as age or kidney stone size, will be a confounder. The solution is to take the right measurements to block all of the back doors. For further reading see:
Pearl, Judea. "Causal diagrams for empirical research." Biometrika 82.4 (1995): 669-688.
To apply this to your problem, let us first draw the causal graph. (Treatment-preceding) kidney stone size $X$ and treatment type $Y$ are both causes of success $Z$. $X$ may be a cause of $Y$ if other doctors are assigning treatment based on kidney stone size. Clearly there are no other causal relationships between $X$, $Y$, and $Z$. $Y$ comes after $X$ so it cannot be its cause. Similarly $Z$ comes after $X$ and $Y$.
Since $X$ is a common cause, it should be measured. It is up to the experimenter to determine the universe of variables and potential causal relationships. For every experiment, the experimenter measures the necessary "back door variables" and then calculates the marginal probability distribution of treatment success for each configuration of variables. For a new patient, you measure the variables and follow the treatment indicated by the marginal distribution. If you can't measure everything or you don't have a lot of data but know something about the architecture of the relationships, you can do "belief propagation" (Bayesian inference) on the network.
|
How to resolve Simpson's paradox?
In your question, you state that you don't know what "causal Bayesian networks" and "back door tests" are.
Suppose you have a causal Bayesian network. That is, a directed acyclic graph whose nodes re
|
6,990
|
How to resolve Simpson's paradox?
|
I have a prior answer that discusses Simpson's paradox here: Basic Simpson's paradox. It may help you to read that to better understand the phenomenon.
In short, Simpson's paradox occurs because of confounding. In your example, the treatment is confounded* with the kind of kidney stones each patient had. We know from the full table of results presented that treatment A is always better. Thus, a doctor should choose treatment A. The only reason treatment B looks better in the aggregate is that it was given more often to patients with the less severe condition, whereas treatment A was given to patients with the more severe condition. Nonetheless, treatment A performed better with both conditions. As a doctor, you don't care about the fact that in the past the worse treatment was given to patients who had the lesser condition, you only care about the patient before you, and if you want that patient to improve, you will provide them with the best treatment available.
*Note that the point of running experiments, and randomizing treatments, is to create a situation in which the treatments are not confounded. If the study in question was an experiment, I would say that the randomization process failed to create equitable groups, although it may well have been an observational study--I don't know.
How to resolve Simpson's paradox?
This nice article by Judea Pearl published in 2013 deals exactly with the problem of which option to choose when confronted with Simpson's paradox:
Understanding Simpson's paradox (PDF)
How to resolve Simpson's paradox?
One important "take away" is that if treatment assignments are disproportionate between subgroups, one must take subgroups into account when analyzing the data.
A second important "take away" is that observational studies are especially prone to delivering wrong answers due to the unknown presence of Simpson's paradox. That's because we cannot correct for the fact that Treatment A tended to be given to the more difficult cases if we don't know that it was.
In a properly randomized study we can either (1) allocate treatment randomly so that giving an "unfair advantage" to one treatment is highly unlikely and will automatically get taken care of in the data analysis or, (2) if there is an important reason to do so, allocate the treatments randomly but disproportionately based on some known issue and then take that issue into account during the analysis.
How to resolve Simpson's paradox?
Do you want the solution to the one example or the paradox in general? There is none for the latter because the paradox can arise for more than one reason and needs to be assessed on a case by case basis.
The paradox is primarily problematic when reporting summary data and is critical in training individuals how to analyze and report data. We don't want researchers reporting summary statistics that hide or obfuscate patterns in the data or data analysts failing to recognize what the real pattern in the data is. No solution was given because there is no one solution.
In this particular case the doctor with the table would clearly always pick A and ignore the summary line. It makes no difference if they know the size of the stone or not. If someone analyzing the data had only reported the summary lines presented for A and B then there'd be an issue because the data the doctor received wouldn't reflect reality. In this case they probably should have also left the last line off of the table since it's only correct under one interpretation of what the summary statistic should be (there are two possible). Leaving the reader to interpret the individual cells would generally have produced the correct result.
(Your copious comments seem to suggest you're most concerned about unequal N issues and Simpson is broader than that so I'm reluctant to dwell on the unequal N issue further. Perhaps ask a more targeted question. Furthermore, you seem to think I am advocating a normalization conclusion. I am not. I am arguing that you need to consider that the summary statistic is relatively arbitrarily selected and that selection by some analyst gave rise to the paradox. I'm further arguing that you look at the cells you have.)
Estimating same model over multiple time series
You could do a grid search: start with ARIMA(1,0,0) and try all the possibilities up to ARIMA(5,2,5) or something. Fit each candidate model to every series, and estimate a scale-independent error measurement like MAPE or MASE (MASE would probably be better). Choose the ARIMA order with the lowest average MASE across all your series.
You could improve this procedure by cross-validating your error measurement for each series, and also by comparing your results to a naive forecast.
It might be a good idea to ask why you're looking for a single model to describe all of the series. Unless they're generated by the same process, this doesn't seem like a good idea.
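A sketch of that MASE-based grid search in Python, using a plain least-squares AR(p) fit as a stand-in for a full ARIMA fitter (the data and orders here are illustrative; a real implementation would use a proper ARIMA package):

```python
import numpy as np

def mase(actual, forecast, train):
    # Mean Absolute Scaled Error: scale the test MAE by the in-sample
    # MAE of the one-step naive (random-walk) forecast.
    scale = np.mean(np.abs(np.diff(train)))
    return np.mean(np.abs(actual - forecast)) / scale

def fit_ar(train, p):
    # Least-squares AR(p) fit with intercept (stand-in for a real ARIMA fitter).
    n = len(train)
    X = np.column_stack([np.ones(n - p)] +
                        [train[p - k - 1: n - k - 1] for k in range(p)])
    coef, *_ = np.linalg.lstsq(X, train[p:], rcond=None)
    return coef

def one_step_forecasts(series, start, coef, p):
    # One-step-ahead forecasts over series[start:], conditioning on actual history.
    preds = [coef[0] + coef[1:] @ series[t - p: t][::-1]
             for t in range(start, len(series))]
    return np.array(preds)

# Three simulated series sharing the same AR(2) process (illustrative data).
rng = np.random.default_rng(0)
series_list = []
for _ in range(3):
    y = np.zeros(120)
    e = rng.normal(size=120)
    for t in range(2, 120):
        y[t] = 0.6 * y[t - 1] - 0.3 * y[t - 2] + e[t]
    series_list.append(y)

# Grid search: pick the order with the lowest average MASE across all series.
best_p, best_score = None, np.inf
for p in range(1, 6):
    avg = np.mean([mase(y[100:],
                        one_step_forecasts(y, 100, fit_ar(y[:100], p), p),
                        y[:100])
                   for y in series_list])
    if avg < best_score:
        best_p, best_score = p, avg
print(best_p, best_score)
```

Averaging the score across series is what enforces a single shared order, which only makes sense if the series plausibly come from the same process.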
Estimating same model over multiple time series
One way to do that is to construct a long time series with all of your data, and with sequences of missing values between the series to separate them. For example, in R, if you have three series (x, y and z) each of length 100 and frequency 12, you can join them as follows
combined <- ts(c(x,rep(NA,56),y,rep(NA,56),z,rep(NA,56)),frequency=12)
Notice that the number of missing values is chosen to ensure the seasonal period is retained. I've padded out the final year with 8 missing values and then added four missing years (48 values) before the next series. That should be enough to ensure any serial correlations wash out between series.
Then you can use auto.arima() to find the best model:
library(forecast)
fit <- auto.arima(combined)
Finally, you can apply the combined model to each series separately in order to obtain forecasts:
fit.x <- Arima(x,model=fit)
fit.y <- Arima(y,model=fit)
fit.z <- Arima(z,model=fit)
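The rep(NA, 56) count above follows from the seasonal period: pad to the end of the current seasonal cycle, then insert whole empty cycles. A small Python sketch of that arithmetic (the four empty years are the choice made above):

```python
def gap_length(series_len, frequency, empty_cycles=4):
    # Pad to the end of the current seasonal cycle, then add whole empty
    # cycles so serial correlations wash out between concatenated series.
    pad_to_cycle = (-series_len) % frequency
    return pad_to_cycle + empty_cycles * frequency

print(gap_length(100, 12))  # -> 8 + 48 = 56, matching rep(NA, 56) above
```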
Estimating same model over multiple time series
Estimating a single model for multiple time series is the realm of panel data econometrics. In your case, with no explanatory variables, @Rob Hyndman's answer is probably the best fit. However, if it turns out that the means of the time series differ (test this, since @Rob Hyndman's method should fail in that case!) while the ARMA structure is the same, then you will have to use an Arellano-Bond type estimator. The model in that case would be:
$$y_{it}=\alpha_i+\rho_1 y_{i,t-1}+...+\rho_p y_{i,t-p}+\varepsilon_{it}$$
where $i$ indicates different time series and $\varepsilon_{it}$ can have the same covariance structure across all $i$.
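To see why the fixed effects $\alpha_i$ call for a special estimator, here is a rough numerical sketch of an Anderson-Hsiao-style estimator (the simplest relative of Arellano-Bond); all numbers are illustrative:

```python
import numpy as np

# Simulate a dynamic panel y_it = alpha_i + rho * y_{i,t-1} + eps_it.
rng = np.random.default_rng(42)
N, T, rho = 20_000, 10, 0.5
alpha = rng.normal(size=N)                      # unit fixed effects alpha_i
y = np.zeros((N, T))
y[:, 0] = alpha + rng.normal(size=N)
for t in range(1, T):
    y[:, t] = alpha + rho * y[:, t - 1] + rng.normal(size=N)

# Pooled OLS on levels ignores alpha_i and is badly biased upward.
x, yy = y[:, :-1].ravel(), y[:, 1:].ravel()
rho_ols = np.polyfit(x, yy, 1)[0]

# First-differencing removes alpha_i; since dy_t = rho * dy_{t-1} + d(eps_t),
# and d(eps_t) is correlated with dy_{t-1}, instrument dy_{t-1} with y_{t-2}
# (Anderson-Hsiao).  Pool the moment conditions over t.
num = den = 0.0
for t in range(3, T):
    z = y[:, t - 2]
    num += np.sum(z * (y[:, t] - y[:, t - 1]))
    den += np.sum(z * (y[:, t - 1] - y[:, t - 2]))
rho_iv = num / den
print(rho_ols, rho_iv)
```

The pooled OLS estimate absorbs the persistence induced by $\alpha_i$ and overstates $\rho$, while the differenced IV estimate recovers it.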
Estimating same model over multiple time series
An alternative to Rob Hyndman's approach, to make a single data series, is to merge the data. This might be appropriate if your multiple time series represent noisy readings from a set of machines recording the same event. (If each time series is on a different scale you need to normalize the data first.)
NOTE: you still only end up with 28 readings, just less noise, so this may not be appropriate for your situation.
library(xts)  # needed for the xts() constructor below
t1 <- xts(jitter(sin(1:28/10), amount=0.2), as.Date("2012-01-01")+1:28)
t2 <- xts(jitter(sin(1:28/10), amount=0.2), as.Date("2012-01-01")+1:28)
t3 <- (t1 + t2)/2
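The point of averaging k independent noisy readings of the same signal is that the noise variance shrinks by 1/k. A quick Python check of that claim (mirroring the t3 = (t1+t2)/2 step; a large n is used only to make the variance estimates stable):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000
signal = np.sin(np.arange(n) / 10)
t1 = signal + rng.uniform(-0.2, 0.2, size=n)   # two noisy "machines"
t2 = signal + rng.uniform(-0.2, 0.2, size=n)
t3 = (t1 + t2) / 2                             # merged series, as above

var_single = np.var(t1 - signal)               # ~ 0.4**2 / 12 ~ 0.0133
var_merged = np.var(t3 - signal)               # ~ half of var_single
print(var_single, var_merged)
```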
Estimating same model over multiple time series
I would look at hidden Markov models and dynamic Bayesian networks. They model time-series data, and they can be trained on multiple time-series instances, e.g., multiple blood-pressure series from different individuals.
You should find packages in Python and R to build these. You may have to define the structure of these models yourself.
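For intuition, the core of HMM scoring (and of training via Baum-Welch) is the forward algorithm, which computes the likelihood of an observation sequence. A self-contained toy version in Python (2 hidden states, discrete emissions; all the numbers are illustrative):

```python
import numpy as np

# Toy HMM: 2 hidden states, 2 possible observation symbols.
pi = np.array([0.6, 0.4])                 # initial state distribution
A = np.array([[0.7, 0.3],                 # state transition matrix
              [0.4, 0.6]])
B = np.array([[0.9, 0.1],                 # emission probs B[state, symbol]
              [0.2, 0.8]])

def forward_likelihood(obs):
    """P(observation sequence) via the forward algorithm."""
    alpha = pi * B[:, obs[0]]             # joint prob of state and first obs
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]     # propagate, then weight by emission
    return alpha.sum()

print(forward_likelihood([0, 1, 0]))
```

Training on multiple series, as mentioned above, amounts to summing the log-likelihoods of the individual sequences under one shared parameter set.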
Is there a way to use the covariance matrix to find coefficients for multiple regression?
Yes, the covariance matrix of all the variables--explanatory and response--contains the information needed to find all the coefficients, provided an intercept (constant) term is included in the model. (Although the covariances provide no information about the constant term, it can be found from the means of the data.)
Analysis
Let the data for the explanatory variables be arranged as $n$-dimensional column vectors $x_1, x_2, \ldots, x_p$ with covariance matrix $C_X$ and the response variable be the column vector $y$, considered to be a realization of a random variable $Y$. The ordinary least squares estimates $\hat\beta$ of the coefficients in the model
$$\mathbb{E}(Y) = \alpha + X\beta$$
are obtained by assembling the $p+1$ column vectors $X_0 = (1, 1, \ldots, 1)^\prime, X_1, \ldots, X_p$ into an $n \times (p+1)$ array $X$ and solving the system of linear equations
$$X^\prime X \hat\beta = X^\prime y.$$
It is equivalent to the system
$$\frac{1}{n}X^\prime X \hat\beta = \frac{1}{n}X^\prime y.$$
Gaussian elimination will solve this system. It proceeds by adjoining the $(p+1)\times (p+1)$ matrix $\frac{1}{n}X^\prime X$ and the $(p+1)$-vector $\frac{1}{n}X^\prime y$ into a $(p+1) \times (p+2)$ array $A$ and row-reducing it.
The first step will inspect $\frac{1}{n}(X^\prime X)_{11} = \frac{1}{n}X_0^\prime X_0 = 1$. Finding this to be nonzero, it proceeds to subtract appropriate multiples of the first row of $A$ from the remaining rows in order to zero out the remaining entries in its first column. These multiples will be $\frac{1}{n}X_0^\prime X_i = \overline X_i$, and the number subtracted from the entry $A_{i+1,j+1} = \frac{1}{n}X_i^\prime X_j$ will equal $\overline X_i \overline X_j$. This is just the formula for the covariance of $X_i$ and $X_j$. Moreover, the number left in the $(i+1, p+2)$ position equals $\frac{1}{n}X_i^\prime y - \overline{X_i}\,\overline{y}$, the covariance of $X_i$ with $y$.
Thus, after the first step of Gaussian elimination the system is reduced to solving
$$C_X\hat{\beta} = (\text{Cov}(X_i, y))^\prime$$
and obviously--since all the coefficients are covariances--that solution can be found from the covariance matrix of all the variables.
(When $C_X$ is invertible the solution can be written $C_X^{-1}(\text{Cov}(X_i, y))^\prime$. The formulas given in the question are special cases of this when $p=1$ and $p=2$. Writing out such formulas explicitly will become more and more complex as $p$ grows. Moreover, they are inferior for numerical computation, which is best carried out by solving the system of equations rather than by inverting the matrix $C_X$.)
The constant term will be the difference between the mean of $y$ and the mean values predicted from the estimates, $X\hat{\beta}$.
Example
To illustrate, the following R code creates some data, computes their covariances, and obtains the least squares coefficient estimates solely from that information. It compares them to the estimates obtained from the least-squares estimator lm.
#
# 1. Generate some data.
#
n <- 10 # Data set size
p <- 2 # Number of regressors
set.seed(17)
z <- matrix(rnorm(n*(p+1)), nrow=n, dimnames=list(NULL, paste0("x", 1:(p+1))))
y <- z[, p+1]
x <- z[, -(p+1), drop=FALSE];
#
# 2. Find the OLS coefficients from the covariances only.
#
a <- cov(x)
b <- cov(x,y)
beta.hat <- solve(a, b)[, 1] # Coefficients from the covariance matrix
#
# 2a. Find the intercept from the means and coefficients.
#
y.bar <- mean(y)
x.bar <- colMeans(x)
intercept <- y.bar - x.bar %*% beta.hat
The output shows agreement between the two methods:
(rbind(`From covariances` = c(`(Intercept)`=intercept, beta.hat),
`From data via OLS` = coef(lm(y ~ x))))
(Intercept) x1 x2
From covariances 0.946155 -0.424551 -1.006675
From data via OLS 0.946155 -0.424551 -1.006675
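The same computation is easy to replicate outside R. A Python/NumPy sketch (with hypothetical data) that solves $C_X\hat\beta = \text{Cov}(X, y)$ and checks the result against ordinary least squares:

```python
import numpy as np

rng = np.random.default_rng(17)
n, p = 10, 2
X = rng.normal(size=(n, p))
y = rng.normal(size=n)

# Coefficients from the covariance matrix alone: solve C_X beta = Cov(X, y).
C_X = np.cov(X, rowvar=False)               # p x p covariance of regressors
c_Xy = np.cov(X, y, rowvar=False)[:p, p]    # Cov(X_i, y)
beta_hat = np.linalg.solve(C_X, c_Xy)
intercept = y.mean() - X.mean(axis=0) @ beta_hat

# Check against OLS with an explicit intercept column.
coef_ols, *_ = np.linalg.lstsq(np.column_stack([np.ones(n), X]), y, rcond=None)
print(intercept, beta_hat, coef_ols)
```

Both routes give identical coefficients, as the Gaussian-elimination argument above guarantees.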
How to draw neat polygons around scatterplot regions in ggplot2 [closed]
With some googling I came across the website of Gota Morota, which already has an example of doing this. Below is that example extended to your data.
library(ggplot2)
library(plyr)
work <- "E:\\Forum_Post_Stuff\\convex_hull_ggplot2"
setwd(work)
#note you have some missing data
mydata <- read.table(file = "emD71JT5.txt",header = TRUE, fill = TRUE)
nomissing <- na.omit(mydata) #chull function does not work with missing data
#getting the convex hull of each unique point set
df <- nomissing
find_hull <- function(df) df[chull(df$eff, df$man), ]
hulls <- ddply(df, "issue", find_hull)
plot <- ggplot(data = nomissing, aes(x = eff, y = man, colour=issue, fill = issue)) +
geom_point() +
geom_polygon(data = hulls, alpha = 0.5) +
labs(x = "Efficiency", y = "Mandate")
plot
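The chull() call is the key step; outside R, the same convex-hull vertices can be computed with, e.g., Andrew's monotone-chain algorithm. A small pure-Python version for illustration:

```python
def convex_hull(points):
    """Andrew's monotone chain: hull vertices in counter-clockwise order."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts

    def cross(o, a, b):
        # z-component of (a - o) x (b - o); > 0 means a left turn.
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

    lower, upper = [], []
    for p in pts:                       # build lower hull left to right
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):             # build upper hull right to left
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]      # endpoints shared, drop duplicates

pts = [(0, 0), (2, 0), (2, 2), (0, 2), (1, 1)]   # square plus interior point
print(convex_hull(pts))                           # the four corners only
```

Applying this per group, then drawing the resulting polygons, mirrors the ddply/geom_polygon combination above.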