13,901
Why does the Akaike Information Criterion (AIC) sometimes favor an overfitted model?
I don't have an answer for you, but here are a few things to consider.
First, there are some duplicate rows in the data you are using, so overfitting is evidently not such a big issue in that case.
However, removing the duplicate rows won't really explain the odd behavior of AIC:
library(tidyverse)
library(splines)
library(AICcmodavg)  # for AICc(); also available in e.g. MuMIn
data(mpg)
mpg_unique <- distinct(mpg[, c('displ', 'hwy')])
df <- 1:19
models_mpg_poly <- lapply(df, function(x) lm(hwy ~ poly(displ, x), data = mpg))
models_mpg_poly_unique <- lapply(df, function(x) lm(hwy ~ poly(displ, x), data = mpg_unique))
models_mpg_splines <- lapply(df, function(x) lm(hwy ~ bs(displ, x), data = mpg))
models_mpg_splines_unique <- lapply(df, function(x) lm(hwy ~ bs(displ, x), data = mpg_unique))
par(mfrow = c(2, 2))
plot(df, sapply(models_mpg_poly, AICc), main = 'AICc poly', ylab = 'AICc')
plot(df, sapply(models_mpg_poly_unique, AICc), main = 'AICc poly unique rows', ylab = 'AICc')
plot(df, sapply(models_mpg_splines, AICc), main = 'AICc splines', ylab = 'AICc')
plot(df, sapply(models_mpg_splines_unique, AICc), main = 'AICc splines unique rows', ylab = 'AICc')
Now, gam will automatically select around 5 degrees of freedom when we give it the same problem, and the fit looks pretty smooth (it uses cubic regression splines instead of B-splines, but that shouldn't matter):
library(mgcv)
gam_model <- gam(hwy ~ s(displ, k = 20, bs = 'cr'), data = mpg_unique)
summary(gam_model)
plot(gam_model)
and when we compare the different complexities manually using AICc, we select df = 6:
models_mpg_gam <- lapply(1:19, function(x) gam(hwy ~ s(displ, k = x, bs = 'cr', fx = TRUE), data = mpg_unique))
plot(sapply(models_mpg_gam, AICc))
So why does this happen? I don't know. It is said that AIC prefers complicated models, but I wouldn't expect it to fail so spectacularly on such a simple problem. It is supposed to approximate out-of-sample, i.e. LOOCV, performance, so models with too many parameters might not actually perform that badly if you have enough data. Your overfitted spline model looks like it would work fine out of sample, although not perfectly; this is definitely not the case for your polynomial model. In the end, AIC is just an approximation, and there are many other approximations and model selection criteria out there. For example, for linear models you can calculate LOOCV exactly without refitting your models. What is striking is that in some fields, AIC is the most important number used to judge your models.
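That last remark about LOOCV can be made concrete. As an illustrative sketch (in Python rather than R, and not part of the original answer): for ordinary least squares, the leave-one-out residuals are $e_i/(1-h_{ii})$, where $h_{ii}$ are the diagonal entries of the hat matrix, so LOOCV needs only a single fit.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50
x = rng.uniform(-2, 2, n)
y = np.sin(x) + rng.normal(0, 0.2, n)

# Design matrix for a cubic polynomial fit.
X = np.vander(x, 4)

# One fit: hat matrix H = X (X'X)^{-1} X', residuals e = y - H y.
H = X @ np.linalg.solve(X.T @ X, X.T)
e = y - H @ y

# LOOCV shortcut: e_i / (1 - h_ii) is the leave-one-out residual.
loocv_fast = np.mean((e / (1 - np.diag(H))) ** 2)

# Brute-force check: actually refit n times, each time leaving one point out.
loocv_slow = np.mean([
    (y[i] - X[i] @ np.linalg.lstsq(np.delete(X, i, 0), np.delete(y, i), rcond=None)[0]) ** 2
    for i in range(n)
])

print(np.isclose(loocv_fast, loocv_slow))  # True
```

The shortcut is exact for any linear smoother, which is one reason LOOCV is cheap for linear models.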
13,902
Does KNN have a loss function?
$k$-NN does not have a loss function that can be minimized during training. In fact, this algorithm is not trained at all. The only "training" that happens for $k$-NN is memorising the data (creating a local copy), so that during prediction you can do a search and a majority vote. Technically, no function is fitted to the data, and so no optimization is done (it cannot be trained using gradient descent).
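To make the "memorise, then search and vote" description concrete, here is an illustrative Python sketch (not from the original answer): fit() only stores the data, and all of the work happens at prediction time.

```python
import numpy as np
from collections import Counter

class KNN:
    def __init__(self, k=3):
        self.k = k

    def fit(self, X, y):
        # "Training" is just memorising the data (a local copy).
        self.X, self.y = np.asarray(X), np.asarray(y)
        return self

    def predict(self, x):
        # Prediction is a search for the k nearest points plus a majority vote.
        dist = np.linalg.norm(self.X - np.asarray(x), axis=1)
        nearest = np.argsort(dist)[:self.k]
        return Counter(self.y[nearest]).most_common(1)[0][0]

clf = KNN(k=3).fit([[0, 0], [0, 1], [1, 0], [5, 5], [5, 6], [6, 5]],
                   ['a', 'a', 'a', 'b', 'b', 'b'])
print(clf.predict([0.2, 0.2]))  # 'a'
print(clf.predict([5.5, 5.5]))  # 'b'
```

Nothing here is optimized; there is no loss to descend on.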
13,903
Does KNN have a loss function?
As an alternative to the accepted answer:
Every stats algorithm is explicitly or implicitly minimizing some objective, even if there are no parameters or hyperparameters, and even if the minimization is not done iteratively. kNN is so simple that one does not typically think of it this way, but you can actually write down an explicit objective function:
$$ \hat{t} = \text{argmax}_\mathcal{C} \sum_{i: x_i \in N_k(\{x\}, \hat{x})} \delta(t_i, \mathcal{C}) $$
What this says is that the predicted class $\hat{t}$ for a point $\hat{x}$ is the class $\mathcal{C}$ which maximizes the number of points $x_i$ in the set of $k$ nearby points $N_k(\{x\}, \hat{x})$ that have that same class, as measured by $\delta(t_i, \mathcal{C})$, which is $1$ when $x_i$ is in class $\mathcal{C}$ and $0$ otherwise.
The advantage of writing it this way is that one can see how to make the objective "softer" by weighting points by proximity. Regarding "training," there are no parameters here to fit. But one could tune the distance metric (which is used to define $N_k$) or the weighting of points in this sum to optimize some additional classification objective. This leads into Neighborhood Component Analysis: https://www.cs.toronto.edu/~hinton/absps/nca.pdf which learns a distance metric.
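The objective above can be evaluated directly in code. Here is a hypothetical Python sketch (function and variable names are my own) that also implements the "softer", proximity-weighted variant mentioned above:

```python
import numpy as np

def knn_objective_predict(X, t, x_new, k, weights=None):
    """Evaluate argmax_C sum_{i in N_k} w_i * delta(t_i, C) directly."""
    X, t = np.asarray(X), np.asarray(t)
    dist = np.linalg.norm(X - np.asarray(x_new), axis=1)
    idx = np.argsort(dist)[:k]                    # indices of N_k({x}, x_hat)
    # Unit weights recover the plain kNN objective; otherwise weight by proximity.
    w = np.ones(k) if weights is None else weights(dist[idx])
    classes = np.unique(t)
    scores = [(w * (t[idx] == c)).sum() for c in classes]
    return classes[int(np.argmax(scores))]

X = [[0, 0], [0, 1], [1, 0], [5, 5], [5, 6], [6, 5]]
t = [0, 0, 0, 1, 1, 1]
print(knn_objective_predict(X, t, [0.3, 0.3], k=3))   # 0 (plain majority vote)
# "Softer" objective: weight each vote by inverse distance.
print(knn_objective_predict(X, t, [5.4, 5.4], k=6,
                            weights=lambda d: 1 / (d + 1e-9)))  # 1
```

With unit weights this reduces to the standard majority vote; the weighting function is one of the knobs that could be tuned, as the answer notes.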
13,904
Is the Gaussian distribution a specific case of the Beta Distribution?
They are both symmetric and more or less bell shaped, but the symmetric beta (whether at (4,4) or at any other specific value) is not actually Gaussian. You can tell this even without looking at the density: beta distributions are supported on $(0,1)$ while all Gaussian distributions are supported on $(-\infty,\infty)$.
Let's look a bit more closely at the comparison. We'll standardize the beta(4,4) so that it has mean 0 and standard deviation 1 (a standardized beta) and look at how the density compares to a standard Gaussian:
The standardized beta(4,4) is restricted to lie between -3 and 3 (the standard Gaussian can take any value); it is also less peaked than the Gaussian and has rounder "shoulders" around 1 or so standard deviations either side of the mean. Its kurtosis is 27/11 ($\approx$2.45, vs 3 for the Gaussian).
Symmetric beta distributions with larger parameter values are closer to Gaussian.
In the limit as the parameter approaches infinity, a standardized symmetric beta approaches a standard normal distribution (example proof here).
So no specific case of the symmetric beta is Gaussian, but the limiting case of a suitably standardized beta is Gaussian. We can see this approach more easily by looking at the cdf of the beta, transformed by the quantile function of the Gaussian. On this scale the Gaussian would lie on the $y=x$ line, while the symmetric beta family would approach the $y=x$ line as the parameter got larger and larger.
In the plot below we look at the deviations from the $y=x$ line to more clearly see the approach of the beta($\alpha$,$\alpha$) to the Gaussian as $\alpha$ increases.
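These numbers are easy to verify numerically; the following is an illustrative Python/scipy check, not part of the original answer:

```python
from scipy import stats

b44 = stats.beta(4, 4)
m, v = b44.stats(moments='mv')

# Mean 1/2, sd 1/6: the standardized Beta(4,4) is confined to (-3, 3).
print((0 - m) / v**0.5, (1 - m) / v**0.5)   # approximately -3 and 3

# Kurtosis 27/11 ~ 2.45 (scipy reports excess kurtosis, i.e. kurtosis - 3).
print(float(b44.stats(moments='k')) + 3)

# Symmetric betas with larger parameters approach the Gaussian value of 3.
for a in (4, 40, 400):
    print(a, float(stats.beta(a, a).stats(moments='k')) + 3)
```

The last loop shows the excess kurtosis shrinking toward zero as the parameter grows, consistent with the limiting-Gaussian claim.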
13,905
Covariance functions or kernels - what exactly are they?
In loose terms, a kernel or covariance function $k(x, x^\prime)$ specifies the statistical relationship between two points $x, x^\prime$ in your input space; that is, how markedly a change in the value of the Gaussian Process (GP) at $x$ correlates with a change in the GP at $x^\prime$. In some sense, you can think of $k(\cdot, \cdot)$ as defining a similarity between inputs (*).
Typical kernels might simply depend on the Euclidean distance (or linear transformations thereof) between points, but the fun starts when you realize that you can do much, much more.
As David Duvenaud puts it:
Kernels can be defined over all types of data structures: Text, images, matrices, and even kernels. Coming up with a kernel on a new type of data used to be an easy way to get a NIPS paper.
For an easy overview of kernels for GPs, I warmly recommend his Kernel Cookbook and references therein.
(*) As @Dikran Marsupial notes, beware that the converse is not true; not all similarity metrics are valid kernels (see his answer).
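As an illustrative sketch (not from the original answer), here is the common squared-exponential kernel in Python, a typical distance-based choice; the lengthscale controls how quickly correlation decays with Euclidean distance:

```python
import numpy as np

def sq_exp_kernel(x1, x2, lengthscale=1.0):
    # k(x, x') = exp(-|x - x'|^2 / (2 l^2)): similarity decays with distance.
    return np.exp(-np.subtract.outer(x1, x2) ** 2 / (2 * lengthscale ** 2))

x = np.linspace(0, 5, 6)
K = sq_exp_kernel(x, x)

# Nearby inputs are strongly correlated, distant ones nearly independent.
print(K[0, 1], K[0, 5])   # exp(-1/2) vs exp(-25/2), i.e. ~0.61 vs nearly 0

# A valid GP covariance matrix: symmetric and positive semidefinite.
print(np.allclose(K, K.T), np.linalg.eigvalsh(K).min() > -1e-10)
```

Replacing the squared Euclidean distance with some other notion of similarity is exactly where the "much, much more" begins.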
13,906
Covariance functions or kernels - what exactly are they?
As @lacerbi suggests, a kernel function (or covariance function in a Gaussian Process setting) is essentially a similarity metric, so that the value of the kernel is high if the two input vectors are considered "similar" according to the needs of the application and lower if they are dissimilar. However, not all similarity metrics are valid kernel functions. To be a valid kernel, the function must be interpretable as computing an inner product in some transformed feature space, i.e. $K(x, x') = \phi(x)\cdot\phi(x')$ where $\phi(\cdot)$ is a function that maps the input vectors into the feature space.
So why must the kernel be interpretable as an inner product in some feature space? The reason is that it is much easier to devise theoretical bounds on generalisation performance for linear models (such as logistic regression) than it is for non-linear models (such as a neural network). Most linear models can be written so that the input vectors only appear in the form of inner products. This means that we can build a non-linear model by constructing a linear model in the kernel feature space. This is a fixed transformation of the data, so all of the theoretical performance bounds for the linear model automatically apply to the new kernel non-linear model*.
An important point that is difficult to grasp at first is that we tend not to think of a feature space that would be good for our particular application and then design a kernel giving rise to that feature space. In general we come up with a good similarity metric and then see if it is a kernel (the test is straightforward, if any matrix of pairwise evaluations of the kernel function at points in general position is positive definite, then it is a valid kernel).
$^*$ Of course, if you tune the kernel parameters to optimise generalisation performance, e.g. by minimising the cross-validation error, then it is no longer a fixed transformation but one that has been learned from the data, and much of the beautiful theory has just been invalidated. So in practice, while the design of kernel methods has a lot of reassuring theory behind it, the bounds themselves generally don't apply to practical applications - but it is still reassuring that there are sound principles underpinning the model.
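The positive-definiteness test described above can be sketched numerically; the following is an illustrative, empirical Python check (names are my own): sample random points, build the Gram matrix of pairwise similarities, and inspect its eigenvalues.

```python
import numpy as np

def looks_like_kernel(sim, n_points=50, dim=3, trials=20, seed=0):
    # Empirical version of the test in the text: a negative eigenvalue of any
    # Gram matrix PROVES the function is not a kernel; passing every trial is
    # only evidence that it might be one.
    rng = np.random.default_rng(seed)
    for _ in range(trials):
        X = rng.normal(size=(n_points, dim))
        G = np.array([[sim(a, b) for b in X] for a in X])
        if np.linalg.eigvalsh((G + G.T) / 2).min() < -1e-8:
            return False
    return True

rbf = lambda a, b: np.exp(-np.sum((a - b) ** 2))   # a valid kernel
neg_dist = lambda a, b: -np.linalg.norm(a - b)     # a similarity, but not a kernel
print(looks_like_kernel(rbf))       # True
print(looks_like_kernel(neg_dist))  # False
```

The negative-distance similarity fails immediately: its Gram matrix has zero trace with nonzero off-diagonal entries, so it must have a negative eigenvalue.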
13,907
How to calculate the expected value of a standard normal distribution?
You are almost there; just continue from your last step:
$$E[X] = \frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty} xe^{-x^{2}/2}\,\mathrm{d}x = -\frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty}e^{-x^2/2}\,d\!\left(-\frac{x^2}{2}\right) = \left.-\frac{1}{\sqrt{2\pi}}e^{-x^2/2}\right|_{-\infty}^{\infty} = 0.$$
Or you can directly use the fact that $xe^{-x^2/2}$ is an odd function and the limits of the integral are symmetric about $x=0$.
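A quick numerical check of the odd-function argument (an illustrative Python/scipy snippet, not part of the original answer):

```python
import numpy as np
from scipy.integrate import quad

# The integrand x * e^{-x^2/2} / sqrt(2*pi) is odd, so its two halves cancel.
integrand = lambda x: x * np.exp(-x**2 / 2) / np.sqrt(2 * np.pi)
right, _ = quad(integrand, 0, np.inf)
left, _ = quad(integrand, -np.inf, 0)
print(right, left)                 # ~0.3989 and ~-0.3989
print(abs(left + right) < 1e-10)   # True
```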
13,908
How to calculate the expected value of a standard normal distribution?
Since you want to learn methods for computing expectations, and you wish to know some simple ways, you will enjoy using the moment generating function (mgf)
$$\phi(t) = E[e^{tX}].$$
The method works especially well when the distribution function or its density are given as exponentials themselves. In this case, you don't actually have to do any integration after you observe
$$t^2/2 -\left(x - t\right)^2/2 = t^2/2 + (-x^2/2 + tx - t^2/2) = -x^2/2 + tx,$$
because, writing the standard normal density function at $x$ as $C e^{-x^2/2}$ (for a constant $C$ whose value you will not need to know), this permits you to rewrite its mgf as
$$\phi(t) = C\int_\mathbb{R} e^{tx} e^{-x^2/2} dx = C\int_\mathbb{R} e^{-x^2/2 + tx} dx = e^{t^2/2}C\int_\mathbb{R} e^{-(x-t)^2/2} dx .$$
On the right hand side, following the $e^{t^2/2}$ term, you will recognize the integral of the total probability of a Normal distribution with mean $t$ and unit variance, which therefore is $1$. Consequently
$$\phi(t) = e^{t^2/2}.$$
Because the Normal density gets small at large values so rapidly, there are no convergence issues regardless of the value of $t$. $\phi$ is recognizably analytic at $0$, meaning it equals its Maclaurin series
$$\phi(t) = e^{t^2/2} = 1 + (t^2/2) + \frac{1}{2} \left(t^2/2\right)^2 + \cdots + \frac{1}{k!}\left(t^2/2\right)^k + \cdots.$$
However, since $e^{tX}$ converges absolutely for all values of $tX$, we also may write
$$E[e^{tX}] = E\left[1 + tX + \frac{1}{2}(tX)^2 + \cdots + \frac{1}{n!}(tX)^n + \cdots\right] \\
= 1 + E[X]t + \frac{1}{2}E[X^2]t^2 + \cdots + \frac{1}{n!}E[X^n]t^n + \cdots.$$
Two convergent power series can be equal only if they are equal term by term, whence (comparing the terms involving $t^{2k} = t^n$)
$$\frac{1}{(2k)!}E[X^{2k}]t^{2k} = \frac{1}{k!}(t^2/2)^k = \frac{1}{2^kk!} t^{2k},$$
implying
$$E[X^{2k}] = \frac{(2k)!}{2^kk!},\ k = 0, 1, 2, \ldots$$
(and all expectations of odd powers of $X$ are zero). For practically no effort you have obtained the expectations of all positive integral powers of $X$ at once.
Variations of this technique can work just as nicely in some cases, such as $E[1/(1-tX)] = E[1 + tX + (tX)^2 + \cdots + (tX)^n + \cdots]$, provided the range of $X$ is suitably limited. The mgf (and its close relative the characteristic function $E[e^{itX}]$) are so generally useful, though, that you will find them given in tables of distributional properties, such as in the Wikipedia entry on the Normal distribution.
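The even-moment formula is easy to spot-check numerically; an illustrative Python snippet (not part of the original answer):

```python
from math import factorial
from scipy import stats

# Check E[X^(2k)] = (2k)! / (2^k k!) for the standard normal: 1, 3, 15, 105, ...
for k in range(1, 5):
    formula = factorial(2 * k) / (2 ** k * factorial(k))
    moment = stats.norm.moment(2 * k)
    print(2 * k, formula, moment)  # the two columns agree
```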
13,909
How to calculate the expected value of a standard normal distribution?
A more straightforward and general way to calculate these kinds of integrals is by a change of variable:
Suppose your normal distribution has mean $\mu$ and variance $\sigma^2$: $\mathcal{N}(\mu, \sigma^2)$.
$$
E(x) = \frac{1}{\sigma\sqrt{2 \pi}} \int x \exp(-\frac{(x-\mu)^2}{2\sigma^2})dx
$$
Now change variables to $y = \frac{x-\mu}{\sigma}$, so that $\frac{dy}{dx}=\frac{1}{\sigma} \rightarrow dx = \sigma\, dy$:
$$
E(x) = \frac{\sigma}{\sigma\sqrt{2 \pi}} \int (\sigma y + \mu) \exp(-\frac{y^2}{2})dy = \\
\frac{\sigma}{\sigma\sqrt{2 \pi}} \left[ \int \sigma y e^{-\frac{y^2}{2}} dy
+ \mu \int e^{-\frac{y^2}{2}} dy \right]
$$
The first integral vanishes: $y$ is an odd function and $e^{-\frac{y^2}{2}}$ is an even function, so the integrand is odd, and with symmetric integration limits the integral is zero. The second integral is the Gaussian integral, which equals $\sqrt{2\pi}$.
$$
E(x) = \frac{\mu \sigma \sqrt{2 \pi}}{\sigma \sqrt{2 \pi}} = \mu
$$
In your case, since your distribution has mean zero by definition, the answer is $E(x) = 0$.
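A quick numerical check (illustrative Python, not part of the original answer) that the integral indeed recovers $\mu$ for a non-standard normal:

```python
import numpy as np
from scipy.integrate import quad

# E[X] for N(mu, sigma^2) computed directly from the density.
mu, sigma = 2.0, 3.0
density = lambda x: np.exp(-(x - mu)**2 / (2 * sigma**2)) / (sigma * np.sqrt(2 * np.pi))
ex, _ = quad(lambda x: x * density(x), -np.inf, np.inf)
print(round(ex, 6))  # 2.0
```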
13,910
|
Julia: Taking stock of how it has been doing
|
I have switched to Julia, and here are my pragmatic reasons:
It does glue code really well. I have a lot of legacy code in MATLAB, and MATLAB.jl took 5 minutes to install, works perfectly, and has a succinct syntax that makes it natural to use MATLAB functions. Julia also has the same for R, Python, C, Fortran, and many other languages.
Julia does parallelism really well. I'm not just talking about multi-processor (shared memory) parallelism, but also multi-node parallelism. I have access to some HPC nodes that aren't used very often because each is pretty slow, so I decided to give Julia a try. I added @parallel to a loop, started it by pointing it at the machine file, and bam, it used all 5 nodes. Try doing that in R/Python. With MPI that would take a while to get working (and that's if you know what you're doing), not a few minutes the first time you try!
Julia's vectorized code is fast (in many cases faster than in any other high-level language), and its devectorized code is almost C-fast. So if you write scientific algorithms, you usually first write them in MATLAB and then re-write them in C. Julia lets you write it once, then add a few compiler hints, and 5 minutes later it's fast. Even if you don't, this means you just write the code whatever way feels natural and it will run well. In R/Python, you sometimes have to think pretty hard to get a good vectorized version (which can be tough to understand later).
The metaprogramming is great. Think of the number of times you've been like "I wish I could ______ in the language". Write a macro for it. Usually someone already has.
Everything is on GitHub. The source code. The packages. It's super easy to read the code, report issues to the developers, talk to them to find out how to do something, or even improve packages yourself.
They have some really good libraries. For statistics, you'd probably be interested in their optimization packages (JuliaOpt is a group which manages them). The numeric packages are already top notch and only improving.
That said, I still really love RStudio, but the new Juno on Atom is really nice. When it's no longer in heavy development and is stable, I can see it being better than RStudio because of the ease of plugins (example: it has a good plugin for adapting to hidpi screens). So I think Julia is a good language to learn now. It has worked out well for me so far. YMMV.
|
13,911
|
Julia: Taking stock of how it has been doing
|
I think "learn X over Y" isn't the right way to formulate the question. In fact, you can learn (at least basics of) both and decide on the right tool depending on concrete task at hand. And since Julia inherited most of its syntax and concepts from other languages, it shoud be really easy to grasp it (as well as Python, though I'm not sure the same may be said about R).
So which language is better suited for what task? Based on my experience with these tools I would rate them as follows:
For pure statistical research that can be done with a REPL and a couple of scripts, R seems to be the perfect choice. It is specifically designed for statistics, has the longest history of tools and probably the largest set of statistical libraries.
If you want to integrate statistics (or, for example, machine learning) into a production system, Python seems like a much better alternative: as a general-purpose programming language it has an awesome web stack and bindings to most APIs, with libraries for literally everything, from scraping the web to creating 3D games.
High-performance algorithms are much easier to write in Julia. If you only need to use or combine existing libraries like SciKit Learn or e1071, backed by C/C++, you will be fine with Python and R. But when it comes to writing a fast backend itself, Julia becomes a real time-saver: it's much faster than Python or R and doesn't require additional knowledge of C/C++. As an example, Mocha.jl reimplements the deep learning framework Caffe, originally written in C++ with a Python wrapper, in pure Julia.
Also don't forget that some libraries are available only in some languages. E.g. only Python has a mature ecosystem for computer vision, some shape-matching and transformation algorithms are implemented only in Julia, and I've heard of some unique packages for statistics in medicine in R.
|
13,912
|
Julia: Taking stock of how it has been doing
|
(b) What sort of Statistics use-cases would you advise someone to use Julia in
(c) If R is slow at a certain task does it make sense to switch to
Julia or Python?
High dimensional and compute intensive problems.
Multiprocessing. Julia's single node parallel capabilities (@spawnat) are much more convenient than those in python. E.g. in python you cannot use a map reduce multiprocessing pool on the REPL and every function you wish to parallelise requires lots of boilerplate.
Cluster computing. Julia's ClusterManagers package lets you use a compute cluster almost as you would a single machine with several cores. [I've been playing with making this feel more like scripting in ClusterUtils ]
Shared Memory. Julia's SharedArray objects are superior to the equivalent shared
memory objects in python.
Speed. My Julia implementation is (single-machine) faster than my R
implementation at random number generation, and at linear algebra (supports multithreaded BLAS).
Interoperability. Julia's PyCall module gives you access to the python ecosystem without wrappers - e.g. I use this for pylab. There's something similar for R, but I've not tried it. There is also ccall for C/Fortran libraries.
GPU. Julia's CUDA wrappers are far more developed than those in python (R's were nearly non-existent when I checked). I suspect this will continue to be the case because of how much easier it is to call external libraries in Julia than in python.
Ecosystem. The Pkg module uses GitHub as a backend. I believe this will have a big impact on the long-run maintainability of Julia modules as it makes it much more straightforward to offer patches or for owners to pass on responsibility.
$\sigma$ is a valid variable name ;)
Writing fast code for large problems will increasingly be dependent on parallel computing. Python is inherently parallel unfriendly (GIL), and native multiprocessing in R is nonexistent AFAIK. Julia doesn't require you to drop down to C to write performant code, while retaining much of the feel of python/R/Matlab.
The main downside to Julia coming from python/R is lack of documentation outside of the core functionality. python is very mature, and what you can't find in the docs is usually on stackoverflow. R's documentation system is pretty good in comparison.
(a) Would you advise any new users of statistical tools to learn Julia
over R?
Yes, if you fit the use cases in part (b). If your use case involves lots of heterogeneous work
|
13,913
|
What would be an illustrative picture for linear mixed models?
|
For a talk, I've used the following picture which is based on the sleepstudy dataset from the lme4 package. The idea was to illustrate the difference between independent regression fits from subject-specific data (gray) versus predictions from random-effects models, especially that (1) predicted values from the random-effects model are shrinkage estimators and that (2) individual trajectories share a common slope in a random-intercept-only model (orange). The distributions of subject intercepts are shown as kernel density estimates on the y-axis (R code).
(The density curves extend beyond the range of observed values because there are relatively few observations.)
A more 'conventional' graphic might be the next one, which is from Doug Bates (available on R-forge site for lme4, e.g. 4Longitudinal.R), where we could add individual data in each panel.
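The shrinkage idea behind point (1) can be sketched numerically. For a random-intercept model with known variance components, the predicted group intercept is the grand mean plus a shrunken group deviation. A textbook simplification in Python with made-up numbers (not the lme4 machinery; the variance components here are assumed, not estimated):

```python
import statistics

def blups(groups, sigma2, tau2):
    """Shrunken group predictions for a random-intercept model with
    known residual variance sigma2 and between-group variance tau2."""
    grand = statistics.mean(v for vs in groups.values() for v in vs)
    out = {}
    for g, vs in groups.items():
        w = tau2 / (tau2 + sigma2 / len(vs))  # shrinkage weight -> 1 as n grows
        out[g] = grand + w * (statistics.mean(vs) - grand)
    return out

groups = {"A": [10.2, 11.1, 9.8], "B": [14.5, 15.2], "C": [8.0, 7.5, 8.4, 7.9]}
print(blups(groups, sigma2=1.0, tau2=4.0))
```

Each prediction lies between the raw group mean and the grand mean, with smaller groups pulled harder toward the grand mean — exactly the behavior the gray vs. colored fits illustrate.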
|
13,914
|
What would be an illustrative picture for linear mixed models?
|
So something not "extremely elegant" but showing random intercepts and slopes too with R. (I guess it would be even cooler if if showed the actual equations also)
N =100; set.seed(123);
x1 = runif(N)*3; readings1 <- 2*x1 + 1.0 + rnorm(N)*.99;
x2 = runif(N)*3; readings2 <- 3*x2 + 1.5 + rnorm(N)*.99;
x3 = runif(N)*3; readings3 <- 4*x3 + 2.0 + rnorm(N)*.99;
x4 = runif(N)*3; readings4 <- 5*x4 + 2.5 + rnorm(N)*.99;
x5 = runif(N)*3; readings5 <- 6*x5 + 3.0 + rnorm(N)*.99;
X = c(x1,x2,x3,x4,x5);
Y = c(readings1,readings2,readings3,readings4,readings5)
Grouping = c(rep(1,N),rep(2,N),rep(3,N),rep(4,N),rep(5,N))
library(lme4);
LMERFIT <- lmer(Y ~ 1+ X+ (X|Grouping))
RIaS <-unlist( ranef(LMERFIT)) #Random Intercepts and Slopes
FixedEff <- fixef(LMERFIT) # Fixed Intercept and Slope
png('SampleLMERFIT_withRandomSlopes_and_Intercepts.png', width=800,height=450,units="px" )
par(mfrow=c(1,2))
plot(X,Y,xlab="x",ylab="readings")
plot(x1,readings1, xlim=c(0,3), ylim=c(min(Y)-1,max(Y)+1), pch=16,xlab="x",ylab="readings" )
points(x2,readings2, col='red', pch=16)
points(x3,readings3, col='green', pch=16)
points(x4,readings4, col='blue', pch=16)
points(x5,readings5, col='orange', pch=16)
abline(v=(seq(-1,4 ,1)), col="lightgray", lty="dotted");
abline(h=(seq( -1,25 ,1)), col="lightgray", lty="dotted")
lines(x1,FixedEff[1]+ (RIaS[6] + FixedEff[2])* x1+ RIaS[1], col='black')
lines(x2,FixedEff[1]+ (RIaS[7] + FixedEff[2])* x2+ RIaS[2], col='red')
lines(x3,FixedEff[1]+ (RIaS[8] + FixedEff[2])* x3+ RIaS[3], col='green')
lines(x4,FixedEff[1]+ (RIaS[9] + FixedEff[2])* x4+ RIaS[4], col='blue')
lines(x5,FixedEff[1]+ (RIaS[10]+ FixedEff[2])* x5+ RIaS[5], col='orange')
legend(0, 24, c("Group1","Group2","Group3","Group4","Group5" ), lty=c(1,1), col=c('black','red', 'green','blue','orange'))
dev.off()
|
13,915
|
What would be an illustrative picture for linear mixed models?
|
This graph, taken from the Matlab documentation of nlmefit, strikes me as one that exemplifies the concept of random intercepts and slopes really clearly. Something showing groups of heteroskedasticity in the residuals of an OLS plot would probably also be pretty standard, but I wouldn't call it a "solution".
|
13,916
|
Logistic regression with binary dependent and independent variables
|
There is no reason not to do this, but two cautionary thoughts:
Keep careful track during the analysis of which is which. In large projects, it can be easy to get lost, and produce errant results.
If you choose to report regression estimates, rather than odds ratios, make your coding scheme clear in your report, so readers don't produce inaccurate ORs on their own assuming they were both coded 0,1.
May seem basic, but I've seen both problems make it into published papers.
|
13,917
|
Logistic regression with binary dependent and independent variables
|
For clarity: the term "binary" is usually reserved for 1 vs 0 coding only. A more general word, suitable for any 2-value coding, is "dichotomous". Dichotomous predictors are of course welcome in logistic regression, just as in linear regression, and, because they have only 2 values, it makes no difference whether you enter them as factors or as covariates.
|
13,918
|
Logistic regression with binary dependent and independent variables
|
Typically it helps interpretation if you code your predictors 0-1, but apart from that (and noting that it is not required), there is nothing wrong with this. There are some other (contingency-table based) approaches, but if I recall correctly, these turn out to be equivalent to (some form of) logistic regression.
So in short: I see no reason not to do this.
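One way to see why this is harmless: with a single dichotomous 0/1 predictor the logistic model is saturated, so the MLE has a closed form — the intercept is the log-odds of the outcome at $x=0$ and the slope is the log odds ratio of the 2×2 table. A small Python sketch with hypothetical counts:

```python
import math

# Hypothetical 2x2 counts: x is the 0/1 predictor, y the 0/1 outcome
counts = {(0, 0): 40, (0, 1): 10, (1, 0): 20, (1, 1): 30}

# Closed-form logistic MLE for a single binary predictor:
# fitted probabilities equal the observed group proportions.
p0 = counts[(0, 1)] / (counts[(0, 0)] + counts[(0, 1)])  # P(y=1 | x=0)
p1 = counts[(1, 1)] / (counts[(1, 0)] + counts[(1, 1)])  # P(y=1 | x=1)
intercept = math.log(p0 / (1 - p0))
slope = math.log(p1 / (1 - p1)) - intercept

odds_ratio = math.exp(slope)
print(odds_ratio)  # cross-product ratio (30*40)/(20*10) = 6
```

This is also why the coding matters for reporting: flipping which level is 0 flips the sign of the slope and inverts the odds ratio.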
|
13,919
|
Logistic regression with binary dependent and independent variables
|
In addition, if you have more than two predictors, it is more likely that there will be a problem of multicollinearity, for logistic just as for multiple regression. However, there is no harm in using logistic regression with all binary variables (i.e., coded (0,1)).
|
13,920
|
Article about misuse of statistical method in NYTimes
|
I will answer the first question in detail.
With a fair coin, the chances of
getting 527 or more heads in 1,000
flips is less than 1 in 20, or 5
percent, the conventional cutoff.
For a fair coin the number of heads in 1000 flips follows the binomial distribution with number of trials $n=1000$ and success probability $p=1/2$. The probability of getting 527 or more heads is then
$$P(B(1000,1/2)\geq 527)$$
This can be calculated with any statistical software package. R gives us
> pbinom(526,1000,1/2,lower.tail=FALSE)
0.04684365
So the probability that a fair coin gives 527 or more heads is approximately 0.047, which is close to the 5% cutoff mentioned in the article.
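The same tail probability can be reproduced without R by summing the binomial probability mass function directly (a Python sketch; `binom_tail` is just an illustrative helper, not a standard library function):

```python
from math import comb

def binom_tail(n, p, k):
    """P(X >= k) for X ~ Binomial(n, p), by direct summation."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

print(binom_tail(1000, 0.5, 527))  # ≈ 0.0468, matching pbinom above
```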
The following statement
To put it another way: the experiment
finds evidence of a weighted coin
“with 95 percent confidence.”
is debatable. I would be reluctant to say it, since 95% confidence can be interpreted in several ways.
Next we turn to
But the experiment did not find all of
the numbers in that range; it found
just one — 527. It is thus more
accurate, these experts say, to
calculate the probability of getting
that one number — 527 — if the coin is
weighted, and compare it with the
probability of getting the same number
if the coin is fair.
Here we compare two events $B(1000,1/2)=527$ -- fair coin, and $B(1000,p)=527$ -- weighted coin. Substituting the formulas for probabilities of these events and noting that the binomial coefficient cancels out we get
$$\frac{P(B(1000,p)=527)}{P(B(1000,1/2)=527)}=\frac{p^{527}(1-p)^{473}}{(1/2)^{1000}}.$$
This is a function of $p$, so we can find its extrema. From the article we may infer that we need the maximum:
Statisticians can show that this ratio
cannot be higher than about 4 to 1,
according to Paul Speckman, a
statistician, who, with Jeff Rouder, a
psychologist, provided the example.
To make the maximisation easier, take the logarithm of the ratio, calculate the derivative with respect to $p$ and set it equal to zero. The solution is
$$p=\frac{527}{1000}.$$
We can check that it really is a maximum, using the second derivative test for example. Substituting it into the formula we get
$$\frac{(527/1000)^{527}(473/1000)^{473}}{(1/2)^{1000}}\approx 4.3$$
So the ratio is 4.3 to 1, which agrees with the article.
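Both the maximiser and the value of the ratio can be checked numerically (a Python sketch; working on the log scale avoids underflow from the tiny raw probabilities):

```python
import math

def log_ratio(p, heads=527, n=1000):
    """log of P(data | p) / P(data | 1/2); the binomial coefficient cancels."""
    return heads * math.log(p) + (n - heads) * math.log(1 - p) + n * math.log(2)

p_hat = 527 / 1000          # the maximiser derived above
print(math.exp(log_ratio(p_hat)))  # ≈ 4.3
```

Evaluating `log_ratio` at nearby values of $p$ confirms that $p = 527/1000$ is indeed the maximum.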
|
13,921
|
Why do t-test and ANOVA give different p-values for two-group comparison?
|
By default the argument var.equal of t.test() equals FALSE.
In lm(), the residuals are supposed to have constant variance.
Thus, by setting var.equal = TRUE in t.test(), you should get the same result.
var.equal indicates whether to treat the two variances as equal. If TRUE, the pooled variance is used to estimate the variance; otherwise the Welch (or Satterthwaite) approximation to the degrees of freedom is used.
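A quick demonstration on simulated data (the data here are my own hypothetical example): for two groups, the pooled-variance t-test and the one-way ANOVA F-test give identical p-values, while the Welch default generally differs slightly:

```r
set.seed(1)
g <- factor(rep(c("A", "B"), each = 20))
y <- rnorm(40) + (g == "B") * 0.5                    # hypothetical two-group data

p_welch  <- t.test(y ~ g)$p.value                    # default: var.equal = FALSE
p_pooled <- t.test(y ~ g, var.equal = TRUE)$p.value  # pooled variance
p_anova  <- anova(lm(y ~ g))[["Pr(>F)"]][1]          # F-test from lm()

all.equal(p_pooled, p_anova)   # TRUE: the pooled t-test is the two-group ANOVA
```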
|
13,922
|
Should we teach kurtosis in an applied statistics course? If so, how?
|
Kurtosis is really pretty simple ... and useful. It is simply a measure of outliers, or tails. It has nothing to do with the peak whatsoever - that definition must be abandoned.
Here is a data set:
0, 3, 4, 1, 2, 3, 0, 2, 1, 3, 2, 0, 2, 2, 3, 2, 5, 2, 3, 999
Notice that '999' is an outlier.
Here are the $z^4$ values from the data set:
0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 360.98
Notice that only the outlier gives a $z^4$ that is noticeably different from 0.
The average of these $z^4$ values is the kurtosis of the empirical distribution (subtract 3 if you like, it doesn't matter for the point I am making): 18.05
It should be obvious from this calculation that the data near the "peak" (the non-outlier data) contribute almost nothing to the kurtosis statistic.
Kurtosis is useful as a measure of outliers. Outliers are important to elementary students and therefore kurtosis should be taught. But kurtosis has virtually nothing to do with the peak, whether it is pointy, flat, bimodal or infinite. You can have all of the above with small kurtosis and all of the above with large kurtosis. So it should NEVER be presented as having anything to do with the peak, because that would be teaching incorrect information. It also makes the material needlessly confusing, and seemingly less useful.
Summary:
kurtosis is useful as a measure of tails (outliers).
kurtosis has nothing to do with the peak.
kurtosis is practically useful and should be taught, but only as a measure of outliers. Do not mention peak when teaching kurtosis.
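The numbers above are easy to reproduce in base R; note that the $z$-scores use the population (divide-by-$n$) standard deviation, matching the moment definition of kurtosis:

```r
x <- c(0, 3, 4, 1, 2, 3, 0, 2, 1, 3, 2, 0, 2, 2, 3, 2, 5, 2, 3, 999)

z <- (x - mean(x)) / sqrt(mean((x - mean(x))^2))  # population sd, not sd()
round(z^4, 2)    # all near 0 except the outlier's value of about 361
mean(z^4)        # about 18.05: the kurtosis, dominated by the single outlier
```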
This article explains clearly why the "Peakedness" definition is now officially dead.
Westfall, P.H. (2014). "Kurtosis as Peakedness, 1905 – 2014. R.I.P." The American Statistician, 68(3), 191–195.
|
13,923
|
Should we teach kurtosis in an applied statistics course? If so, how?
|
While the question is somewhat vague, it is interesting. At what levels is kurtosis taught? I remember it being mentioned in a (master's-level) course on linear models (a long time ago, based on the first edition of Seber's book). It was not an important topic, but it comes up in topics such as the (lack of) robustness of the likelihood-ratio test (F-test) for equality of variances, where (from memory) the correct asymptotic level depends on having the same kurtosis as the normal distribution, which is too much to assume! We saw a paper (though I never read it in detail) http://www.jstor.org/stable/4615828?seq=1#page_scan_tab_contents by Oja, which tries to pin down what skewness, kurtosis and the like really measure.
Why do I find this interesting? Because I have been teaching in Latin America, where skewness and kurtosis seem to be taught by many as important topics, and it was difficult to convince post-graduate students (many from economics) that kurtosis is a bad measure of the shape of a distribution (mainly because the sampling variability of fourth powers is simply too large). I tried to get them to use QQ-plots instead. So, to some of the commenters: yes, this is taught in some places, probably too much!
By the way, this is not only my opinion. The following blog post https://www.spcforexcel.com/knowledge/basic-statistics/are-skewness-and-kurtosis-useful-statistics contains this citation (attributed to Dr. Wheeler):
In short, skewness and kurtosis are practically worthless. Shewhart
made this observation in his first book. The statistics for skewness
and kurtosis simply do not provide any useful information beyond that
already given by the measures of location and dispersion.
We should teach better techniques for studying the shapes of distributions, such as QQ-plots (or relative distribution plots). And if somebody still needs numerical measures, measures based on L-moments are better. I will quote one passage from the paper by J. R. M. Hosking, "L-Moments: Analysis and Estimation of Distributions Using Linear Combinations of Order Statistics", J R Statist Soc B (1990) 52, No 1, pp 105--124, page 109:
An alternative justification of these interpretations of L-moments
may be based on the work of Oja (1981), Oja defined intuitively
reasonable criteria for one probability distribution on the real line
to be located further to the right (more dispersed, more skew, more
kurtotic) than another. A real-valued functional of a distribution
that preserves the partial ordering of distributions implied by these
criteria may then reasonably be called a 'measure of location
(dispersion, skewness, kurtosis)'. It follows immediately from Oja's
work that $\lambda_1$ and $\lambda_2$ , in Oja's notation, $\mu(F)$
and $\frac12 \sigma_1(F)$, are measures of location and scale
respectively. Hosking (1989) shows that $\tau_3$ and $\tau_4$ are, by
Oja's criteria, measures of skewness and kurtosis respectively.
(For the moment, I refer to the paper for the definitions of these measures; they are all based on L-moments.) The interesting thing is that the traditional measure of kurtosis, based on fourth moments, is not a measure of kurtosis in Oja's sense! (I will edit in references for that claim when I can find them.)
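For readers who want to try the L-moment alternative, here is a from-scratch sketch of Hosking's unbiased sample L-moments in base R (the `lmom` package's `samlmu()` offers a production implementation; the function name `lmoments` below is my own choice, not from the answer):

```r
# Sample L-moments via Hosking's unbiased b_r estimators; tau3 and tau4
# are the L-skewness and L-kurtosis ratios.
lmoments <- function(x) {
  x <- sort(x); n <- length(x); i <- seq_len(n)
  b0 <- mean(x)
  b1 <- sum((i - 1) / (n - 1) * x) / n
  b2 <- sum((i - 1) * (i - 2) / ((n - 1) * (n - 2)) * x) / n
  b3 <- sum((i - 1) * (i - 2) * (i - 3) /
            ((n - 1) * (n - 2) * (n - 3)) * x) / n
  l1 <- b0
  l2 <- 2 * b1 - b0
  l3 <- 6 * b2 - 6 * b1 + b0
  l4 <- 20 * b3 - 30 * b2 + 12 * b1 - b0
  c(l1 = l1, l2 = l2, tau3 = l3 / l2, tau4 = l4 / l2)
}

set.seed(42)
round(lmoments(rnorm(1e4)), 3)  # tau4 near 0.123, the normal's L-kurtosis
```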
|
13,924
|
Should we teach kurtosis in an applied statistics course? If so, how?
|
In my opinion, the skewness coefficient is useful for motivating the terms positively skewed and negatively skewed. But that is where its usefulness stops if your goal is to assess normality. Classical measures of skewness and kurtosis often fail to capture various kinds of deviation from normality. I usually advocate that my students use graphical techniques to assess whether normality is reasonable, such as a QQ-plot or a normal probability plot. With an adequately sized sample, a histogram can also be used. Boxplots are also useful for identifying outliers and heavy tails.
This is in line with the recommendations of a 1999 APA task force:
"Assumptions. You should take efforts to assure that the underlying assumptions required for the analysis are reasonable given the data. Examine residuals carefully. Do not use distributional tests and statistical indexes of
shape (e.g., skewness, kurtosis) as a substitute for examining your residuals graphically. Using a statistical test to diagnose problems in model
fitting has several shortcomings. First, diagnostic significance tests based on summary statistics (such as tests for homogeneity of variance) are often impractically sensitive; our statistical tests of models are often more robust than our statistical tests of assumptions. Second, statistics such as skewness and kurtosis often fail to detect distributional irregularities in the residuals. Third, statistical tests depend on sample size, and as sample size increases, the tests often will reject innocuous assumptions. In general, there is no substitute for graphical analysis of assumptions."
Reference:
Wilkinson, L., & Task Force on Statistical Inference. (1999). Statistical methods in psychology journals: Guidelines and explanations. American Psychologist, 54, 594-604.
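The graphical checks advocated in this answer take only a few lines of base R; the right-skewed sample here is my own hypothetical example:

```r
set.seed(7)
x <- rexp(200)        # hypothetical skewed data

qqnorm(x); qqline(x)  # curvature away from the line reveals the skewness
hist(x, breaks = 20)  # with adequate n, the histogram tells the same story
boxplot(x)            # flags the heavy upper tail as outlying points
```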
|
13,925
|
Should we teach kurtosis in an applied statistics course? If so, how?
|
Depending on how applied the course is, the question of accuracy of estimates might come up. The accuracy of the variance estimate depends strongly on kurtosis. The reason this happens is that with high kurtosis, the distribution allows rare, extreme potentially observable data. Thus the data-generating process will produce very extreme values in some samples, and not so extreme values in others. In the former case, you get a very large variance estimate, and in the latter, a small variance estimate.
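A small simulation illustrates this (sample sizes and distributions are my own choices): a heavy-tailed $t_5$ population, rescaled to unit variance, makes the variance estimate far more variable than a normal population of the same variance does:

```r
set.seed(123)
n <- 50; reps <- 5000

s2_norm <- replicate(reps, var(rnorm(n)))                     # excess kurtosis 0
s2_t5   <- replicate(reps, var(rt(n, df = 5) / sqrt(5 / 3)))  # excess kurtosis 6

# spread of the variance estimate: much larger under high kurtosis
c(sd_normal = sd(s2_norm), sd_t5 = sd(s2_t5))
```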
If the outdated and incorrect "peakedness" interpretation were eliminated, and the focus given entirely to outliers (i.e., rare, extreme observables) instead, then it would be easier to teach kurtosis in introductory courses. But people twist themselves into knots trying to justify "peakedness" because it is (incorrectly) stated that way in their textbooks, and they miss the real applications of kurtosis. These applications mostly relate to outliers, and of course outliers are important in applied statistics courses.
|
13,926
|
Should we teach kurtosis in an applied statistics course? If so, how?
|
Frankly, I don't understand why people want to complicate simple things. Why not just show the definition (stolen from Wikipedia):
$$\operatorname{Kurt}[X] = \operatorname{E}\left[\left(\frac{X - \mu}{\sigma}\right)^4\right] = \frac{\mu_4}{\sigma^4} = \frac{\operatorname{E}[(X-\mu)^4]}{(\operatorname{E}[(X-\mu)^2])^2},
$$
You can replace the expectation operator with its sample analogue $\frac 1 n \sum_{i=1}^n$, of course. It helps to discuss the units of measure of $\mu,\sigma^2,\mu_4$, and to show why the fourth moment should be scaled by the square of the variance to make kurtosis a dimensionless measure, i.e. a shape parameter. So we now have location $\mu$, scale $\sigma^2$, and any number of parameters to describe the shape, such as skewness and kurtosis. I would always start with the equations. Supposedly easy-to-understand explanations in plain English only make everything more confusing. Verbosity $\ne$ clarity.
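The definition translates directly into a sample estimator in base R (using the population $1/n$ moments):

```r
kurt <- function(x) {
  m <- mean(x)
  mean((x - m)^4) / mean((x - m)^2)^2  # fourth moment over squared variance
}

set.seed(1)
kurt(rnorm(1e5))  # near 3, the normal value
kurt(runif(1e5))  # near 1.8 = 9/5, the uniform value
```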
|
13,927
|
PCA and exploratory Factor Analysis on the same dataset: differences and similarities; factor model vs PCA
|
Both models - the principal-component model and the common-factor model - are similarly straightforward linear regression models predicting observed variables by latent variables. Let us have centered variables V1 V2 ... Vp, and suppose we choose to extract 2 components/factors, FI and FII. Then the model is the system of equations:
$V_1 = a_{1I}F_I + a_{1II}F_{II} + E_1$
$V_2 = a_{2I}F_I + a_{2II}F_{II} + E_2$
$...$
$V_p = …$
where coefficient a is a loading, F is a factor or a component, and variable E comprises the regression residuals. Here the FA model differs from the PCA model in exactly this respect: FA imposes the requirement that the variables E1 E2 ... Ep (the error terms, which are uncorrelated with the Fs) must not correlate with each other (see pictures). FA calls these error variables "unique factors"; their variances are known (the "uniquenesses") but their casewise values are not. Therefore factor scores F are computed only as good approximations; they are not exact.
(A matrix algebra presentation of this common factor analysis model is in Footnote $^1$.)
Whereas in PCA the error variables from predicting different variables may freely correlate: nothing is imposed on them. They represent that "dross" we've taken the left-out p-2 dimensions for. We know the values of E and so we can compute component scores F as exact values.
That was the difference between PCA model and FA model.
It is due to the difference outlined above that FA is able to explain pairwise correlations (covariances). PCA generally cannot (unless the number of extracted components = p); it can only explain the multivariate variance$^2$. So, as long as the term "factor analysis" is defined via the aim of explaining correlations, PCA is not factor analysis. If "factor analysis" is defined more broadly, as a method providing or suggesting latent "traits" which can be interpreted, PCA can be seen as a special, simplest form of factor analysis.
Sometimes - in some datasets under certain conditions - PCA leaves E terms which are almost uncorrelated. Then PCA can explain correlations and becomes like FA. This is not very uncommon with datasets with many variables, and it has led some observers to claim that PCA results approach FA results as the data grow. I don't think it is a rule, but the tendency may well exist. Anyway, given their theoretical differences, it is always good to select the method consciously. FA is the more realistic model if you want to reduce variables down to latents which you are going to regard as real latent traits standing behind the variables and making them correlate.
But if you have another aim - to reduce dimensionality while preserving the distances between the points of the data cloud as much as possible - PCA is better than FA. (An iterative multidimensional scaling (MDS) procedure will be even better then; PCA amounts to noniterative metric MDS.) If you further don't care much about the distances and are interested only in preserving as much of the overall variance of the data as possible in a few dimensions, PCA is an optimal choice.
$^1$ Factor analysis data model: $\mathbf {V=FA'+E}diag \bf(u)$, where $\bf V$ is n cases x p variables analyzed data (columns centered or standardized), $\bf F$ is n x m common factor values (the unknown true ones, not factor scores) with unit variance, $\bf A$ is p x m matrix of common factor loadings (pattern matrix), $\bf E$ is n x p unique factor values (unknown), $\bf u$ is the p vector of the unique factor loadings equal to the sq. root of the uniquenesses ($\bf u^2$). Portion $\mathbf E diag \bf(u)$ could be just labeled as "E" for simplicity, as it is in the formulas opening the answer.
Principal assumptions of the model:
$\bf F$ and $\bf E$ variables (common and unique factors, respectively) have zero means and unit variances;
$\bf E$ is typically assumed multivariate normal but $\bf F$ in general case needs not be multivariate normal (if both are assumed multivariate normal then $\bf V$ are so, too);
$\bf E$ variables are uncorrelated with each other and are uncorrelated with $\bf F$ variables.
$^2$ It follows from the common factor analysis model that loadings $\bf A$ of m common factors (m<p variables), also denoted $\bf A_{(m)}$, should closely reproduce observed covariances (or correlations) between the variables, $\bf \Sigma$. So that if factors are orthogonal, the fundamental factor theorem states that
$\bf \hat{\Sigma} = AA'$ and $\bf \Sigma \approx \hat{\Sigma} + \it diag \bf (u^2)$,
where $\bf \hat{\Sigma}$ is the matrix of reproduced covariances (or correlations) with common variances ("communalities") on its diagonal; and the unique variances ("uniquenesses") - variances minus communalities - form the vector $\bf u^2$. The off-diagonal discrepancy ($\approx$) arises because the factor model is a theoretical model generating the data, and as such it is simpler than the observed data it was built on. The main causes of the discrepancy between the observed and the reproduced covariances (or correlations) may be: (1) the number of factors m is not statistically optimal; (2) partial correlations (these are p(p-1)/2 factors that do not belong to common factors) are pronounced; (3) communalities are not well assessed, their initial values having been poor; (4) the relationships are not linear, so using a linear model is questionable; (5) the model "subtype" produced by the extraction method is not optimal for the data (see about different extraction methods). In other words, some FA data assumptions are not fully met.
As for plain PCA, it reproduces covariances by the loadings exactly when m=p (all components are used) and usually fails to do so when m<p (only the first few components retained). The factor theorem for PCA is:
$\bf \Sigma= AA'_{(p)} = AA'_{(m)} + AA'_{(p-m)}$,
so both the $\bf A_{(m)}$ loadings and the dropped $\bf A_{(p-m)}$ loadings are mixtures of communalities and uniquenesses, and neither alone can restore the covariances. The closer m is to p, the better PCA restores covariances, as a rule, but a small m (which is often what interests us) doesn't help. This is different from FA, which is intended to restore covariances with a quite small, optimal number of factors. If $\bf AA'_{(p-m)}$ approaches diagonality, PCA becomes like FA, with $\bf A_{(m)}$ restoring all the covariances. This happens occasionally with PCA, as I've already mentioned. But PCA lacks the algorithmic ability to force such diagonalization. It is the FA algorithms that do it.
FA, not PCA, is a data-generative model: it presumes a few "true" common factors (of usually unknown number, so you try out m within a range) which generate the "true" values of the covariances. The observed covariances are the "true" ones plus small random noise. (It is thanks to the performed diagonalization, which leaves $\bf A_{(m)}$ the sole restorer of all covariances, that this noise can be small and random.) Trying to fit more factors than is optimal amounts to an overfitting attempt, and not necessarily an efficient one.
Both FA and PCA aim to maximize $trace(\bf A'A_{(m)})$, but for PCA it is the only goal; for FA it is a concomitant goal, the other being to diagonalize away the uniquenesses. That trace is the sum of eigenvalues in PCA. Some extraction methods in FA add further concomitant goals at the expense of maximizing the trace, so it is not of principal importance.
To summarize the explicated differences between the two methods: FA aims (directly or indirectly) at minimizing the differences between individual corresponding off-diagonal elements of $\bf \Sigma$ and $\bf AA'$. A successful FA model is one that leaves the errors for the covariances small and random-like (normal or uniform about 0, with no outliers/fat tails). PCA only maximizes $trace(\bf AA')$, which is equal to $trace(\bf A'A)$ (and $\bf A'A$ equals the covariance matrix of the principal components, a diagonal matrix). Thus PCA isn't "busy" with all the individual covariances: it simply cannot be, being merely a form of orthogonal rotation of the data.
Thanks to maximizing the trace - the variance explained by m components - PCA accounts for covariances, since covariance is shared variance. In this sense PCA is a "low-rank approximation" of the whole covariance matrix of the variables. And when seen from the viewpoint of the observations, this approximation is an approximation of the Euclidean-distance matrix of the observations (which is why PCA is the metric MDS called "Principal coordinate analysis"). This fact should not screen us from the reality that PCA does not model the covariance matrix (each covariance) as generated by a few living latent traits imaginable as transcendent to our variables; the PCA approximation remains immanent, even if it is good: it is a simplification of the data.
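This contrast is easy to check numerically. Below is a minimal sketch of my own (Python/scikit-learn, not part of the original answer's computations): data are generated from a true 2-factor model, then ML factor analysis and PCA, each with m=2, are asked to reproduce the observed covariances. Only FA drives the off-diagonal residuals down to sampling noise.

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis, PCA

rng = np.random.default_rng(0)
n, p, m = 5000, 8, 2
A = rng.normal(size=(p, m))                 # true common factor loadings
u = rng.uniform(0.3, 1.5, size=p)           # unique factor st. deviations
X = rng.normal(size=(n, m)) @ A.T + rng.normal(size=(n, p)) * u

S = np.cov(X, rowvar=False)                 # observed covariance matrix
off = ~np.eye(p, dtype=bool)                # mask for off-diagonal entries

fa = FactorAnalysis(n_components=m).fit(X)  # Sigma_hat = AA' + diag(u^2)
pca = PCA(n_components=m).fit(X)

err_fa = np.abs(S - fa.get_covariance())[off].max()
err_pca = np.abs(S - pca.get_covariance())[off].max()
print(err_fa, err_pca)  # FA leaves much smaller off-diagonal residuals
```

With m equal to the true number of factors, FA's residuals shrink with n, while PCA's off-diagonal residuals stay systematic: its loadings mix communalities with uniquenesses.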
If you want to see step-by-step computations done in PCA and FA, commented and compared, please look here.
13,928
PCA and exploratory Factor Analysis on the same dataset: differences and similarities; factor model vs PCA
I provided my own account of the similarities and differences between PCA and FA in the following thread: Is there any good reason to use PCA instead of EFA? Also, can PCA be a substitute for factor analysis?
Note that my account is somewhat different from the one by @ttnphns (as presented in his answer above). My main claim is that PCA and FA are not as different as is often thought. They can indeed strongly differ when the number of variables is very low, but tend to yield quite similar results once the number of variables is over around a dozen. See my [long!] answer in the linked thread for mathematical details and Monte Carlo simulations. For a much more concise version of my argument see here: Under which conditions do PCA and FA yield similar results?
Here I would like to explicitly answer your main question: Is there anything wrong with performing PCA and FA on the same data set? My answer to this is: No.
When running PCA or FA, you are not testing any hypothesis. Both of them are exploratory techniques that are used to get a better understanding of the data. So why not explore the data with two different tools? In fact, let's do it!
Example: wine data set
As an illustration, I used a fairly well-known wine dataset with $n=178$ wines from three different grapes described by $p=13$ variables. See my answer here: What are the differences between Factor Analysis and Principal Component Analysis? for more details, but briefly -- I ran both PCA and FA analyses and made 2D biplots for both of them. One can easily see that the difference is minimal:
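The biplots themselves can't be reproduced in text, but the closeness is easy to verify: scikit-learn happens to ship a copy of this same wine dataset, and the following sketch (mine, illustrative only) compares the 2D loading subspaces of PCA and ML factor analysis via principal angles — cosines near 1 mean the two methods span nearly the same plane.

```python
import numpy as np
from sklearn.datasets import load_wine
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA, FactorAnalysis

X = StandardScaler().fit_transform(load_wine().data)  # 178 wines x 13 variables

pca = PCA(n_components=2).fit(X)
fa = FactorAnalysis(n_components=2).fit(X)

# Compare the planes spanned by the two loading matrices, up to
# rotation and sign, using the cosines of the principal angles.
Q1, _ = np.linalg.qr(pca.components_.T)
Q2, _ = np.linalg.qr(fa.components_.T)
cosines = np.linalg.svd(Q1.T @ Q2, compute_uv=False)
print(cosines)  # both values close to 1: nearly identical subspaces
```

This is rotation-invariant, so it doesn't matter that FA factors and principal components are ordered and scaled differently.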
13,929
Modern neural networks that build their own topology
The implicit question here is how can you determine the topology/structure of a neural network or machine learning model so that the model is "of the right size" and not overfitting/underfitting.
Since cascade correlation back in 1990, a whole host of methods for doing this has appeared, many of them with much better statistical or computational properties:
boosting: train a weak learner at a time, with each weak learner given a reweighted training set so that it learns things that past learners haven't learnt.
sparsity inducing regularization like lasso or automatic relevance determination: start with a large model/network, and use a regularizer that encourages the unneeded units to get "turned off", leaving those that are useful active.
Bayesian nonparametrics: forget trying to find the "right" model size. Just use one big model, and be careful with regularizing/being Bayesian, so you don't overfit. For example, a neural network with an infinite number of units and Gaussian priors can be derived to be a Gaussian process, which turns out to be much simpler to train.
Deep learning: as noted in another answer, train a deep network one layer at a time. This doesn't actually solve the problem of determining the number of units per layer - often this is still set by hand or cross-validation.
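As an aside, the sparsity-inducing idea above is easy to demonstrate in miniature outside neural networks. A sketch with made-up data, using scikit-learn's lasso: start with 10 candidate inputs and let the L1 penalty switch the unneeded ones off.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 10))                              # 10 candidate inputs
y = 3 * X[:, 0] - 2 * X[:, 1] + 0.1 * rng.normal(size=1000)  # only 2 matter

coef = Lasso(alpha=0.5).fit(X, y).coef_
print(np.round(coef, 2))
# the L1 penalty drives the 8 irrelevant coefficients to (essentially) zero,
# keeping only the two useful "units" active
```

The same principle — start large, regularize, and let unneeded units switch off — is what automatic relevance determination applies to network weights.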
13,930
Modern neural networks that build their own topology
How about NeuroEvolution of Augmenting Topologies (NEAT) http://www.cs.ucf.edu/~kstanley/neat.html
It seems to work for simple problems, but is INCREDIBLY slow to converge.
13,931
Modern neural networks that build their own topology
As I understand it, the state of the art today is "Unsupervised Feature Learning and Deep Learning". In a nutshell: the network is trained in an unsupervised manner, one layer at a time:
http://ufldl.stanford.edu/wiki/index.php/UFLDL_Tutorial
http://www.youtube.com/watch?v=ZmNOAtZIgIk&feature=player_embedded
13,932
Modern neural networks that build their own topology
There's already been a mention of NEAT (Neural Evolution with Augmenting Topologies). There are advances on this including speciation and HyperNEAT. HyperNEAT uses a 'meta' network to optimise the weighting of a fully connected phenotype. This gives a network 'spatial awareness', which is invaluable in image recognition and board-game-type problems. You aren't limited to 2D either. I'm using it in 1D for signal analysis, and 2D upward is possible but gets heavy on processing requirements. Look for papers by Ken Stanley, and there's a group on Yahoo. If you have a problem that's tractable with a network, then NEAT and/or HyperNEAT may well be applicable.
13,933
Modern neural networks that build their own topology
There is a somewhat recent paper on this topic:
R. P. Adams, H. Wallach, and Zoubin Ghahramani. Learning the structure of deep sparse graphical models.
This is a bit outside the usual neural network community and more on the machine learning side.
The paper uses non-parametric Bayesian inference on the network structure.
13,934
Which one is the null hypothesis? Conflict between science theory, logic and statistics?
In statistics there are tests of equivalence as well as the more common approach of testing the Null and deciding whether there is sufficient evidence against it. The equivalence test turns this on its head: it posits that the effects are different as the Null, and we determine whether there is sufficient evidence against this Null.
I'm not clear on your drug example. If the response is a value/indicator of the effect, then an effect of 0 would indicate "not effective". One would set that as the Null and evaluate the evidence against it. If the effect is sufficiently different from zero, we would conclude that the no-effectiveness hypothesis is inconsistent with the data. A two-tailed test would count sufficiently negative values of the effect as evidence against the Null. A one-tailed test - the effect is positive and sufficiently different from zero - might be a more interesting test.
If you want to test whether the effect is 0, then we'd need to flip this around and use an equivalence test where H0 is that the effect is not equal to zero, and the alternative H1 is that the effect = 0. That would evaluate the evidence against the idea that the effect is different from 0.
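A minimal sketch of such an equivalence test (the TOST, "two one-sided tests", procedure; the simulated data and margin here are mine, purely for illustration): the Null is that the effect lies outside ±delta, and it is rejected only if both one-sided t-tests reject.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
x = rng.normal(loc=0.02, scale=1.0, size=2000)  # a tiny, practically-zero effect
delta = 0.2                                     # equivalence margin, fixed a priori

n, mean = len(x), x.mean()
se = x.std(ddof=1) / np.sqrt(n)
# TOST Null: the effect is OUTSIDE (-delta, +delta)
p_lower = stats.t.sf((mean + delta) / se, df=n - 1)   # test H0a: mu <= -delta
p_upper = stats.t.cdf((mean - delta) / se, df=n - 1)  # test H0b: mu >= +delta
p_tost = max(p_lower, p_upper)
print(p_tost)  # small: evidence that the effect lies within +/- delta
```

Note the asymmetry with the ordinary test: here a small p-value supports "no meaningful effect", which is exactly the flipped Null described above.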
13,935
Which one is the null hypothesis? Conflict between science theory, logic and statistics?
I think this is another case where frequentist statistics can't give a direct answer to the question you actually want to ask, and so answers a (not so) subtly different question, and it is easy to misinterpret this as a direct answer to the question you actually wanted to ask.
What we would really like to ask is normally what is the probability that the alternative hypothesis is true (or perhaps how much more likely it is to be true than the null hypothesis). However a frequentist analysis fundamentally cannot answer this question, as to a frequentist a probability is a long-run frequency, and in this case we are interested in the truth of a particular hypothesis, which doesn't have a long-run frequency - it is either true or it isn't. A Bayesian, on the other hand, can answer this question directly, as to a Bayesian a probability is a measure of the plausibility of some proposition, so it is perfectly reasonable in a Bayesian analysis to assign a probability to the truth of a particular hypothesis.
The way frequentists deal with particular events is to treat them as a sample from some (possibly fictitious) population and make a statement about that population in place of a statement about the particular sample. For example, if you want to know the probability that a particular coin is biased, after observing N flips with h heads and t tails, a frequentist analysis cannot answer that question; however, it could tell you the proportion of coins from a distribution of unbiased coins that would give h or more heads when flipped N times. As the natural definition of a probability that we use in everyday life is generally a Bayesian one, rather than a frequentist one, it is all too easy to treat this as the probability that the null hypothesis (the coin is unbiased) is true.
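To make the coin example concrete (my own sketch, with made-up counts): the frequentist quantity is a tail probability over repeated flips of a fair coin, while the Bayesian quantity is a direct statement of plausibility about this coin's bias.

```python
from scipy import stats

N, h = 100, 60  # hypothetical: 60 heads in 100 flips

# Frequentist: P(>= 60 heads | fair coin) -- a statement about repeated sampling
p_value = stats.binomtest(h, N, p=0.5, alternative='greater').pvalue

# Bayesian: posterior P(theta > 0.5 | data) under a flat Beta(1, 1) prior
# -- a direct statement about THIS coin
posterior = stats.beta(1 + h, 1 + N - h)
p_biased = posterior.sf(0.5)

print(p_value, p_biased)  # two different numbers answering two different questions
```

The two numbers are not interchangeable, which is exactly the misreading the paragraph above warns about.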
Essentially, frequentist hypothesis tests have an implicit subjectivist Bayesian component lurking at their heart. The frequentist test can tell you the likelihood of observing a statistic at least as extreme under the null hypothesis; however, the decision to reject the null hypothesis on those grounds is entirely subjective - there is no rational requirement for you to do so. Essentially, experience has shown that we are generally on reasonably solid ground to reject the null if the p-value is sufficiently small (again, the threshold is subjective), so that is the tradition. AFAICS it doesn't fit well into the philosophy or theory of science; it is essentially a heuristic.
That doesn't mean it is a bad thing though; despite its imperfections, frequentist hypothesis testing provides a hurdle that our research must get over, which helps us as scientists to keep our self-skepticism and not get carried away with enthusiasm for our theories. So while I am a Bayesian at heart, I still use frequentist hypothesis tests on a regular basis (at least until journal reviewers are comfortable with the Bayesian alternatives).
|
Which one is the null hypothesis? Conflict between science theory, logic and statistics?
|
I think this is another case where frequentist statistics can't give a direct answer to the question you actually want to ask, and so answers a (no so) subtly different question, and it is easy to mis
|
Which one is the null hypothesis? Conflict between science theory, logic and statistics?
I think this is another case where frequentist statistics can't give a direct answer to the question you actually want to ask, and so answers a (no so) subtly different question, and it is easy to misinterpret this as a direct answer to the question you actually wanted to ask.
What we would really like to ask is normally what is the probability that the alternative hypothesis is true (or perhaps how much more likely to be true is it than the null hypothesis). However a frequentist analysis fundamentally cannot answer this question, as to a frequentist a probability is a long run frequency, and in this case we are interested in the truth of a particular hypothesis, which doesn't have a long run frequency - it is either true or it isn't. A Bayesian on the other hand can answer this question directly, as to a a Bayesian a probability is a measure of the plausibility of some proposition, so it is perfectly reasonable in a Bayesian analysis to assign a probability to the truth of a particular hypothesis.
The way frequentists deal will particular events is to treat them as a sample from some (possibly fictitious) population and make a statement about that population in place of a statement about the particular sample. For example, if you want to know the probability that a particular coin is biased, after observing N flips and observing h heads and t tails, a frequentist analysis cannot answer that question, however they could tell you the proportion of coins from a distribution of unbiased coins that would give h or more heads when flipped N times. As the natural definition of a probability that we use in everyday life is generally a Bayesian one, rather than a frequentist one, it is all too easy to treat this as the pobability that the null hypothesis (the coin is unbiased) is true.
Essentially frequentist hypothesis tests have an implicit subjectivist Bayesian component lurking at its heart. The frequentist test can tell you the likelihood of observing a statistic at least as extreme under the null hypothesis, however the decision to reject the null hypothesis on those grounds is entirely subjective, there is no rational requirement for you to do so. Essentiall experience has shown that we are generally on reasonably solid ground to reject the null if the p-value is suffciently small (again the threshold is subjective), so that is the tradition. AFAICS it doesn't fit well into the philosophy or theory of science, it is essentially a heuristic.
That doesn't mean it is a bad thing though. Despite its imperfections, frequentist hypothesis testing provides a hurdle that our research must get over, which helps us as scientists to keep our self-skepticism and not get carried away with enthusiasm for our theories. So while I am a Bayesian at heart, I still use frequentist hypothesis tests on a regular basis (at least until journal reviewers are comfortable with the Bayesian alternatives).
|
13,936
|
Which one is the null hypothesis? Conflict between science theory, logic and statistics?
|
To add to Gavin's answer, a couple of things:
First, I've heard this idea that propositions can only be falsified, but never proven. Could you post a link to a discussion of this, because with our wording here it doesn't seem to hold up very well - if X is a proposition, then not(X) is a proposition too. If disproving propositions is possible, then disproving X is the same as proving not(X), and we've proven a proposition.
Second, your analogy between the P(effective|$test_+$) and P(dog|4 legs) is interesting. The wording should be changed a little bit though:
The drug is effective (i.e.: iff the drug is effective you will see an effect).
In fact, P(effective|$test_+$) is often greater than P($test_+$|effective), as long as you use hypothesis testing and the right statistical model. Hypothesis testing formalizes the unlikelihood of positive test results under $H_0$. But an effective drug doesn't guarantee a positive test; when the drug is effective and variance is high, the effect can be masked in the test.
If you observe $test_+$ you can infer effectiveness, because the alternative is $H_0$, and the hypothesis testing is set up so that P($test_+$|$H_0$) < 0.05.
So the difference between the dog case and the effectiveness case is in the appropriateness of the inference from the evidence to the conclusion. In the dog case, you have observed some evidence that doesn't strongly imply a dog. But in the clinical trial case you have observed some evidence that does strongly imply efficacy.
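This can be made concrete with Bayes' rule. The numbers below (power 0.8, test size 0.05, prior 0.5) are purely illustrative assumptions, not values from the answer above:

```python
def p_effective_given_positive(power, alpha, prior):
    """Bayes' rule: P(effective | test+) from the test's power, size, and a prior."""
    return power * prior / (power * prior + alpha * (1 - prior))

# With P(test+|effective) = 0.8, P(test+|H0) = 0.05, and P(effective) = 0.5:
print(round(p_effective_given_positive(0.8, 0.05, 0.5), 3))  # 0.941 > 0.8
```

Because the test is designed so that false positives are rare under $H_0$, the posterior probability of efficacy given a positive test exceeds the test's power.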
|
13,937
|
Which one is the null hypothesis? Conflict between science theory, logic and statistics?
|
You are right that, in a sense, frequentist hypothesis testing has it backwards. I'm not saying that that approach is wrong, but rather that the results are often not designed to answer the questions that the researcher is most interested in. If you want a technique more similar to the scientific method, try Bayesian inference.
Instead of talking about a "null hypothesis" that you can reject or fail to reject, with Bayesian inference you begin with a prior probability distribution based upon your understanding of the situation at hand. When you acquire new evidence, Bayesian inference provides a framework for you to update your belief with the evidence taken into account. I think this is more similar to how science works.
|
13,938
|
Which one is the null hypothesis? Conflict between science theory, logic and statistics?
|
I think you've got a fundamental error here (not that the whole area of hypothesis testing is clear!): you say the alternative is what we try to prove. But this is not right. We attempt to reject (falsify) the null.
If the results we obtain would be very unlikely if the null were true, we reject the null.
Now, as others said, this is not usually the question we want to ask: We don't usually care how likely the results are if the null is true, we care how likely the null is, given the results.
|
13,939
|
Which one is the null hypothesis? Conflict between science theory, logic and statistics?
|
If I'm understanding you correctly, you're in agreement with the late, great Paul Meehl. See
Meehl, P.E. (1967). Theory-testing in psychology and physics: A methodological paradox. Philosophy of Science, 34:103-115.
|
13,940
|
Which one is the null hypothesis? Conflict between science theory, logic and statistics?
|
I'll expand on the mention of Paul Meehl by @Doc:
1) Testing the opposite of your research hypothesis as the null hypothesis makes it so you can only affirm the consequent which is a "formally invalid" argument. The conclusions do not necessarily follow from the premise.
If Bill Gates owns Fort Knox, then he is rich.
Bill Gates is rich.
Therefore, Bill Gates owns Fort Knox.
http://rationalwiki.org/wiki/Affirming_the_consequent
If the theory is "This drug will improve recovery" and you observe improved recovery this does not mean you can say your theory is true. The appearance of improved recovery could have occurred for some other reason. No two groups of patients or animals will be exactly the same at baseline and will change further over time during the study. This is a greater problem for observational than experimental research because randomization "defends" against severe imbalances of unknown confounding factors at baseline. However, randomization does not really resolve the problem. If the confounds are unknown we have no way to tell the extent to which the "randomization defense" has been successful.
Also see table 14.1 and the discussion of why no theory can be tested on its own (there are always auxiliary factors that tag along) in:
Paul Meehl. "The Problem Is Epistemology, Not Statistics: Replace Significance Tests by Confidence Intervals and Quantify Accuracy of Risky Numerical Predictions" In L. L. Harlow, S. A. Mulaik, & J. H. Steiger (Eds.), What If There Were No Significance Tests? (pp. 393–425) Mahwah, NJ : Erlbaum, 1997.
2) If some type of bias is introduced (e.g., imbalance on some confounding factors) we do not know which direction this bias will lie or how strong it is. The best guess we can give is that there is a 50% chance of biasing the treatment group in the direction of higher recovery. As sample sizes get large there is also 50% chance that your significance test will detect this difference and you will interpret the data as corroborating your theory.
This situation is totally different from the case of a null hypothesis that "This drug will improve recovery by x%". In this case the presence of any bias (which I would say always exist in comparing groups of animals and humans) makes it more likely for you to reject your theory.
Think of the "space" (Meehl calls it the "Spielraum") of possible results bounded by the most extreme measurements possible. Perhaps there can be 0-100% recovery, and you can measure with resolution of 1%. In the common significance testing case, the space consistent with your theory will be 99% of the possible outcomes you could observe. In the case when you predict a specific difference the space consistent with your theory will be 1% of the possible outcomes.
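Meehl's point about random-sign bias can be illustrated with a quick Monte Carlo sketch. All the numbers here (the bias scale, sample size, and number of studies) are invented for illustration: the true effect is zero, yet with large samples roughly half of the studies end up "confirming" the directional hypothesis.

```python
import math
import random

def fraction_confirming(n_studies=4000, n=1_000_000, bias_sd=0.05, seed=0):
    """True effect is zero; each study carries an unknown bias of random sign.
    Count studies where a one-sided test at alpha = .05 favours the theory."""
    random.seed(seed)
    se = math.sqrt(2.0 / n)              # standard error of the mean difference
    confirms = 0
    for _ in range(n_studies):
        bias = random.gauss(0, bias_sd)  # unknown confounding, random direction
        diff = random.gauss(bias, se)    # observed treatment-control difference
        if diff / se > 1.645:
            confirms += 1
    return confirms / n_studies

print(fraction_confirming())  # close to one half with these settings
```

As n grows, almost any nonzero bias becomes "significant", so corroboration of the directional theory approaches a coin flip.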
Another way of putting it is that finding evidence against a null hypothesis of mean1=mean2 is not a severe test of the research hypothesis that a drug does something. A null of mean1 < mean2 is better but still not very good.
See figure 3 and 4 here:
Meehl, P.E. (1990). Appraising and amending theories: The strategy of Lakatosian defense and two principles that warrant using it. Psychological Inquiry, 1, 108-141, 173-180.
|
13,941
|
Which one is the null hypothesis? Conflict between science theory, logic and statistics?
|
Isn't all statistics premised on the assumption that nothing is certain in the natural world (as distinct from the man-made world of games &c)? In other words, the only way we can get near to understanding it is by measuring the probability that one thing correlates with another. This probability varies between 0 and 1, but could only be 1 if we could test the hypothesis an infinite number of times in an infinite number of different circumstances, which of course is impossible - and we can never know it was zero for the same reason. It's a more reliable approach to understanding the reality of nature than mathematics, which deals in absolutes and mostly relies on equations, which we know are idealistic because if, literally, the LH side of an equation really equalled the RH side, the two sides could be reversed and we wouldn't learn anything. Strictly speaking it applies only to a static world, not a 'natural' one which is intrinsically turbulent. Hence, the null hypothesis should even underwrite mathematics - whenever it is used to understand nature itself.
|
13,942
|
Which one is the null hypothesis? Conflict between science theory, logic and statistics?
|
I think the problem is in the word 'true'. The reality of the natural world is innately unknowable, as it's infinitely complex and infinitely variable over time, so 'truth' applied to nature is always conditional. All we can do is try to find levels of probable correspondence between variables by repeated experiment. In our attempt to make sense of reality, we look for what seems like order in it and construct conceptually conscious models of it in our minds to help us make sensible decisions, BUT it's very much a hit-and-miss affair because there's always the unexpected. The null hypothesis is the only reliable starting point in our attempt to make sense of reality.
|
13,943
|
Which one is the null hypothesis? Conflict between science theory, logic and statistics?
|
We must select as the null hypothesis the one which we want to reject.
In our hypothesis testing scenario there is a critical region: if the test statistic falls in the critical region, we reject the hypothesis; otherwise we accept it.
So suppose we select as the null hypothesis the one we want to accept, and the test statistic under the null hypothesis does not fall in the critical region, so we accept the null hypothesis. The problem here is that even if the null hypothesis falls in the acceptance region, this does not mean the alternative hypothesis would not also be consistent with that region, and if that is the case, our interpretation of the result will be wrong. So we must only take as the null hypothesis the one we want to reject. If we are able to reject the null hypothesis, that supports the alternative hypothesis. But if we are not able to reject the null hypothesis, then either of the two hypotheses could be correct. We could then run another test in which we take our alternative hypothesis as the null hypothesis and attempt to reject it; if we can reject it, we can say our initial null hypothesis was supported.
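The critical-region mechanics described above can be sketched with a simple one-sided z-test. The data values are made up, and 1.645 is the usual critical value for a test at alpha = 0.05:

```python
import math

def one_sided_z_test(xbar, mu0, sigma, n, z_crit=1.645):
    """Reject H0: mu <= mu0 when the z statistic lands in the critical region."""
    z = (xbar - mu0) / (sigma / math.sqrt(n))
    return z, z > z_crit

z, reject = one_sided_z_test(xbar=10.5, mu0=10.0, sigma=2.0, n=64)
print(round(z, 2), reject)  # 2.0 True -> H0 rejected
```

Note that a statistic outside the critical region (failing to reject) tells us nothing about which hypothesis is correct, which is the point made above.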
|
13,944
|
Within the frequentist "school of thought" how are beliefs updated?
|
If you're representing beliefs coherently with numbers you're Bayesian by definition. There are at least 46656 different kinds of Bayesian (counted here: http://fitelson.org/probability/good_bayes.pdf) but "quantitatively updating beliefs" is the one thing that unites them; if you do that, you're in the Bayesian club. Also, if you want to update beliefs, you have to update using Bayes rule; otherwise you'll be incoherent and get dutch-booked. Kinda funny how the one true path to normative rationality still admits so many varieties though.
Even though Bayesians have a monopoly on 'belief' (by definition) they don't have a monopoly on "strength of evidence". There's other ways you can quantify that, motivating the kind of language given in your example. Deborah Mayo goes into this in detail in "Statistical Inference as Severe Testing". Her preferred option is "severity". In the severity framework you don't ever quantify your beliefs, but you do get to say "this claim has been severely tested" or "this claim has not been severely tested" and you can add to severity incrementally by applying multiple tests over time. That sure feels a lot like strengthening belief; you just don't get to use that exact word to describe it (because the Bayesians own the word 'belief' now). And it really is a different thing, so it's good to avoid the possible terminology collision: what you get from high severity is good error control rates, not 'true(er) beliefs'. It behaves a lot like belief in the way it is open to continual updating though! Being picky about not calling it 'belief' is purely on the (important) technicality of not dealing in states-of-knowledge, distinguishing it from the thing Bayesians do.
Mayo writes and links to plenty more on this at https://errorstatistics.com/
Sounds like you might enjoy "Bernoulli's Fallacy" by Aubrey Clayton: it's pretty accessible popsci but really cuts to the roots of this question. Discussed in podcast form here https://www.learnbayesstats.com/episode/51-bernoullis-fallacy-crisis-modern-science-aubrey-clayton
|
13,945
|
Within the frequentist "school of thought" how are beliefs updated?
|
Disclaimer: I have a Bayesian bias.
The purpose of frequentist hypothesis testing is to reject the null hypothesis: that is not the same as proving the alternative hypothesis. Such experiments don't give you "evidence that $H$ is true", even if you can hear people making claims like this. The $p$-value is the probability of observing data at least as extreme as the observed $d$, given that the hypothesis $H$ is true: $P(D > d \mid H)$.
The Bayesian posterior probability is the other way around: the probability that $H$ is true given the data, $P(H|D) = P(D|H) P(H) / P(D)$. So in fact in the Bayesian setting you do update your prior belief $P(H)$ about $H$ given the observed data, while in the frequentist setting you don't. Frequentist experiments don't tell you how likely $H$ is, and because of that they don't give you a direct framework for updating your beliefs about it. If you rejected the hypothesis that $H=5$, it still can be anything (just not $5$), and you still don't know what it is.
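As a toy illustration of that update, here is a grid approximation of $P(H|D)$ for a coin-bias hypothesis with a flat prior. The data (7 heads, 3 tails) and the grid size are made-up assumptions for illustration:

```python
def posterior_grid(heads, tails, grid_size=101):
    """P(H|D) over a grid of coin-bias hypotheses, flat prior, via Bayes' rule."""
    thetas = [i / (grid_size - 1) for i in range(grid_size)]
    likelihood = [th**heads * (1 - th)**tails for th in thetas]  # P(D|H)
    evidence = sum(likelihood) / grid_size                       # P(D)
    post = [l / grid_size / evidence for l in likelihood]        # P(H|D)
    return thetas, post

thetas, post = posterior_grid(7, 3)
mode = thetas[max(range(len(post)), key=post.__getitem__)]
print(mode)  # 0.7 - the most plausible bias given the data
```

The prior $P(H)$ is explicitly updated into a posterior; there is no analogous step in the frequentist test.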
There is also maximum likelihood, "find $H$ such that the likelihood of observing $D$ is highest under it", but again, it doesn't tell you what $P(H|D)$ is.
Finally, if you consider probabilities to be a measure of beliefs, you are already taking a Bayesian side. Thanks to adopting such a viewpoint, you can measure how true it is given the data. In a frequentist setting, you can only make the "assuming that it is true" claims.
|
13,946
|
Within the frequentist "school of thought" how are beliefs updated?
|
No, there is not a formal method that frequentists follow to update their beliefs. My very abridged explanation of why this is the case is as follows, and focuses just on frequentist testing methods. We can regard hypothesis testing as addressing the question: given the assumed probability model, is the data (statistically) consistent with the null hypothesis? To take a concrete example, suppose a claim has been made that aspirin reduces acne. A group of scientists decides to test that claim, and a large, well-designed experiment is undertaken. The null hypothesis is that aspirin does not reduce acne. A p-value of 0.3 is observed, and the null hypothesis is not rejected. The scientists may or may not have had views (let alone ‘beliefs’) about the null hypothesis before or after the experiment, but who cares if they did? What matters is the evidence produced by the experiment. Science progresses by testing the consensus until sufficient evidence arises to change that consensus.
|
13,947
|
Can anyone explain why the dot product is used in neural networks, and what is the intuitive thought behind the dot product?
|
Dot products describe part of how neural nets work, conceptually. I'll describe the concept first using scalars, and then show how this can be re-written using the dot product.
Let's take a look at a single unit in a typical neural net. It receives inputs $\{x_1, \dots, x_n\}$ from other units, and produces an output $y$. To compute the output, we multiply each input by a corresponding weight $\{w_1, \dots, w_n\}$. The weights determine the strength of the connection from each input. We sum the weighted inputs to obtain the total amount of input, then add a bias term $b$. The final output is obtained by running this sum through an activation function $f$, which describes the way that the unit responds to the total input. The activation function is typically nonlinear, e.g. a sigmoidal or rectified linear function. So we have:
$$y = f \left ( \sum_{i=1}^n w_i x_i + b \right )$$
The weighted sum can be re-written as a dot product, which is more convenient notation, and can be computed more efficiently. Let the vector $x = [x_1, \dots, x_n]$ contain the inputs, and the vector $w = [w_1, \dots, w_n]$ contain the corresponding weights. By the definition of the dot product:
$$\sum_{i=1}^n w_i x_i = w \cdot x$$
Plug this back into the equation for the output:
$$y = f \left ( w \cdot x + b \right )$$
In practice, you wouldn't compute the outputs one by one, but for an entire layer of the neural net simultaneously. This would use matrix multiplication rather than individual dot products, which can be implemented more efficiently using numerical linear algebra libraries.
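The equations above can be sketched in a few lines of numpy. This is an illustrative example, not from the original answer; the ReLU activation and the specific numbers are arbitrary choices:

```python
import numpy as np

def relu(z):
    """Rectified linear activation f(z) = max(0, z)."""
    return np.maximum(0, z)

x = np.array([0.5, -1.0, 2.0])   # inputs to the unit
w = np.array([0.1, 0.4, 0.3])    # connection weights
b = 0.2                          # bias

# Single unit: y = f(w . x + b)
y = relu(np.dot(w, x) + b)

# A whole layer of 4 units at once: each row of W is one unit's weight
# vector, so W @ x computes all four dot products in one matrix product.
W = np.random.randn(4, 3)
b_vec = np.zeros(4)
layer_out = relu(W @ x + b_vec)

print(y, layer_out.shape)
```

The layer version is why deep learning frameworks spend most of their time in matrix-multiply routines rather than in explicit per-unit dot products.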
|
13,948
|
Can anyone explain why the dot product is used in neural networks, and what is the intuitive thought behind the dot product?
|
To answer this question we need to go back to one of the earliest neural networks the Rosenblatt’s perceptron where using vectors and the property of dot product to split hyperplanes of feature vectors were first used.
This may be familiar to many, but for some a refresher may help.
Q. What does a vector mean?
A vector is meaningless¹ unless you specify the context - a vector space. If we are thinking about something like a force vector, the context is a 2D or 3D Euclidean world.
Source: 3Blue1Brown’s video on Vectors
From https://towardsdatascience.com/perceptron-learning-algorithm-d5db0deab975
¹Maths is really abstract and meaningless unless you apply it to a context - this is one reason why you will get tripped up if you try to build a purely mathematical intuition about neural networks.
The easiest way to understand it is in a geometric context, say 2D or 3D cartesian coordinates, and then extrapolate it. This is what we will try to do here.
Q. What is the connection between Matrices and Vectors?
Vectors are represented as matrices. An example is a Euclidean vector in three-dimensional Euclidean space (or $R^{3}$), represented as a column vector (usually) or a row vector:
$$
a = \begin{bmatrix}
a_{1}\\a_{2}\\a_{3}
\end{bmatrix}
\quad \text{or} \quad
a = \begin{bmatrix} a_{1} & a_{2} & a_{3}\end{bmatrix}
$$
Q. What is a dot product, and what does it signify?
First the dry definitions.
Algebraically, the dot product is the sum of the products of the corresponding entries of the two sequences of numbers.
if $ \vec a = \left\langle {{a_1},{a_2},{a_3}} \right\rangle $ and $\vec b = \left\langle {{b_1},{b_2},{b_3}} \right\rangle $, Then
$$
\begin{equation}\vec a\centerdot \vec b = {a_1}{b_1} + {a_2}{b_2} + {a_3}{b_3}\label{eq:eq1}\end{equation}
$$
Geometrically, it is the product of the Euclidean magnitudes of the two vectors and the cosine of the angle between them
$$
\begin{equation}\vec a\centerdot \vec b = \left\| {\vec a} \right\|\,\,\left\| {\vec b} \right\|\cos \theta \label{eq:eq2} \end{equation}
$$
These definitions are equivalent when using Cartesian coordinates.
Here is a simple proof that follows from trigonometry -
http://tutorial.math.lamar.edu/Classes/CalcII/DotProduct.aspx
(You may need this article too -https://sergedesmedt.github.io/MathOfNeuralNetworks/VectorMath.html#learn_vector_math_diff)
Now to the meat of the answer, the intuition part.
If two vectors are in the same direction the dot product is positive and if they are in the opposite direction the dot product is negative.
(Try it here -
https://sergedesmedt.github.io/MathOfNeuralNetworks/DotProduct.html#learn_dotproduct)
So you could use the dot product as a way to find out if two vectors are aligned or not.
That is, for any two distinct sets of input feature vectors in a vector space (say we are classifying whether a leaf is healthy or not based on certain features of the leaf), we can find a weight vector whose dot product with the feature vectors of one class (say, healthy leaves) is positive and with those of the other class is negative. In essence, the weight vector defines a hyperplane that splits the feature space into two distinct sets.
The initial neural network - Rosenblatt's perceptron - did exactly this, and could only do this: it finds a solution if and only if the input set is linearly separable. (That constraint contributed to an AI winter and froze the hopes/hype generated by the perceptron when it was proved that it could not solve XNOR, which is not linearly separable.)
Here is how Rosenblatt's perceptron is modelled.
Image source https://maelfabien.github.io/deeplearning/Perceptron/#the-classic-model
Inputs are $x_1$ to $x_n$, and the weights $w_1$ to $w_n$ are values that are learned. There is also a bias $b$, which in the figure above is $-\theta$.
Moving the threshold $\theta$ to the left-hand side as a bias $b = -\theta$, the equation is
$$
\begin{equation}
f(x) =
\begin{cases}
1, & \text{if}\ \textbf{w}\cdot\textbf{x}+b \geq 0 \\
0, & \text{otherwise} \\
\end{cases}
\end{equation}
$$
If we take a dummy input $x_0 = 1$, then we can treat the bias as a weight $w_0$, so that it fits cleanly into the summation:
$$ y = 1 \text{ if } \sum_{i=0}^{n} w_i x_i \geq 0, \text{ else } y = 0 $$
The sum is the dot product of the weight and input vectors, $\mathbf{w}\cdot\mathbf{x}$.
(Note that the dot product of two column vectors can be written as the transpose of one multiplied by the other, $w^Tx$ - you will see it written this way in some articles.)
$$
\begin{equation}
\sigma(w^Tx + b)=
\begin{cases}
1, & \text{if}\ w^Tx + b \geq 0 \\
0, & \text{otherwise} \\
\end{cases}
\end{equation}
$$
Basically, all three equations are the same.
Quoting from https://sergedesmedt.github.io/MathOfNeuralNetworks/RosenblattPerceptronArticle.html:
So, the equation $ \bf w⋅x>b $ defines all the points on one side of
the hyperplane, and $ \bf w⋅x<=b$ all the points on the other side of
the hyperplane and on the hyperplane itself. This happens to be the
very definition of “linear separability” Thus, the perceptron allows
us to separate our feature space in two convex half-spaces
Please also see the above article from sergedesmedt; it also explains how the weights are trained.
So you can see how important dot product, and the representation of inputs and weights as vectors, are in neural networks.
This concept comes into play in modern neural networks as well.
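The perceptron's dot-product decision rule can be sketched briefly in Python. This is an illustration only, not from the original answer: the weight vector and the two "leaf" feature points are invented, with the bias folded in as $w_0$ via a dummy input $x_0 = 1$ as described above:

```python
import numpy as np

def perceptron_predict(w, x):
    """Classify via the sign of the dot product w . x (w[0] is the bias, x[0] = 1)."""
    return 1 if np.dot(w, x) >= 0 else 0

# Hypothetical linearly separable 2D data: class 1 when x1 + x2 > 1.
# The weight vector w = (-1, 1, 1) encodes the separating line x1 + x2 = 1.
w = np.array([-1.0, 1.0, 1.0])

points = [([1.0, 0.9, 0.9], 1),   # above the line -> e.g. healthy leaf
          ([1.0, 0.1, 0.2], 0)]   # below the line -> e.g. unhealthy leaf
for x, label in points:
    assert perceptron_predict(w, np.array(x)) == label
print("the dot product separates the two classes")
```

The sign of the dot product tells us which side of the hyperplane a feature vector falls on, which is exactly the "linear separability" property quoted above.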
|
13,949
|
Can anyone explain why the dot product is used in neural networks, and what is the intuitive thought behind the dot product?
|
The reason we use dot products is because lots of things are lines.
One way of seeing it is that the use of dot product in a neural network originally came
from the idea of using dot product in linear regression.
The most frequently used definition of a line is $y = ax+b$. But this is the same as saying $b = y-ax$, which is the same as saying $b = (y,x) \cdot (1,-a)$.
So mathematically, a line is expressed with a dot product between the coordinate axes $y,x$ and some other vector. And lines are useful for linear regression. And you can view neural networks as a linear model with a nonlinear activation tacked on top.
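The identity $b = (y,x) \cdot (1,-a)$ can be checked numerically. A short sketch (the slope and intercept here are arbitrary, chosen just for illustration):

```python
# Check that b = (y, x) . (1, -a) holds for points on the line y = a*x + b.
a, b = 2.0, 3.0   # arbitrary slope and intercept

for x in [-1.0, 0.0, 2.5]:
    y = a * x + b
    dot = y * 1 + x * (-a)   # the dot product (y, x) . (1, -a)
    assert abs(dot - b) < 1e-12
print("every point on the line satisfies (y, x) . (1, -a) = b")
```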
|
13,950
|
Can anyone explain why the dot product is used in neural networks, and what is the intuitive thought behind the dot product?
|
It's very rough and imprecise, but I think of the dot product between two matrices or vectors as: "how much are they pulling in the same direction".
If the dot product is 0, they are pulling at a 90 degree angle. If the dot product is positive, they are pulling in the same general direction. If the dot product is negative, they are pulling away from each other.
If the dot product of normalized vectors is 1, they are the same.
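This intuition is easy to verify with a few example vectors (the vectors below are invented for illustration, not from the original answer):

```python
import numpy as np

u = np.array([1.0, 0.0])
same = np.array([2.0, 0.5])       # roughly the same direction
ortho = np.array([0.0, 3.0])      # at a 90 degree angle
opposite = np.array([-1.5, 0.1])  # pulling away

print(np.dot(u, same))      # positive
print(np.dot(u, ortho))     # zero
print(np.dot(u, opposite))  # negative

# A normalized vector dotted with itself gives 1.
v = np.array([3.0, 4.0])
v_hat = v / np.linalg.norm(v)
print(np.dot(v_hat, v_hat))
```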
|
13,951
|
R: calculate p-value given Chi Squared and Degrees of Freedom
|
In applied statistics, chi-squared test statistics arise as sums of squared residuals, from sums of squared effects, or from log-likelihood differences. In all of these applications, the aim is to test whether some vector parameter is zero vs the alternative that it is non-zero, and the chi-squared statistic is related to the squared size of the observed effect. The required p-value is the right-tail probability for the chi-squared value, which in R for your example is:
> pchisq(15, df=2, lower.tail=FALSE)
[1] 0.0005530844
For other df or statistic values, you obviously just substitute them into the above code.
All cumulative probability functions in R compute left-tail probabilities by default. However, they also have a lower.tail argument, and you can always set this to FALSE to get the right-tail probability. It is good practice to do this rather than to compute $1-p$ as you might see in some elementary textbooks.
The function qchisq does the reverse calculation, finding the value ("q" is for quantile) of the chi-squared statistic corresponding to any given tail probability.
For example, the chi-squared statistic corresponding to a p-value of 0.05 is given by
> qchisq(0.05, df=2, lower.tail=FALSE)
[1] 5.991465
|
13,952
|
R: calculate p-value given Chi Squared and Degrees of Freedom
|
R has a suite of probability functions for density or mass in the form d* (e.g., dbeta, dchisq), and distribution in the form p* (e.g., pf, pgamma). You might wish to start there.
|
13,953
|
R: calculate p-value given Chi Squared and Degrees of Freedom
|
Yes, it is possible to calculate the chi-square value for a given p-value (p) and degrees of freedom (df). Below is how to go about it:
For the sake of verification, I first calculate p for a given chi-square value = 1.1 and df=1:
Solution:
pchisq(1.1, df=1, lower.tail=FALSE)# the answer is p=0.2942661
Now, to go backward by using p and df to calculate chi-square value, I used the p=0.2942661 I obtained from above and df=1 above:
Solution:
qchisq(0.2942661, 1, lower.tail=FALSE) # the answer is 1.1 as in the first solution.
So using your example of Chi Squared = 15 with df = 2, the solutions are below:
Solution: calculate p-value
pchisq(15, df=2, lower.tail=FALSE)# answer: p= 0.0005530844
use the p= 0.0005530844 and df=2 to get back the chi-square value
qchisq(0.0005530844, 2, lower.tail=FALSE)# answer: chi-square = 15
Hope this helps!!!
|
13,954
|
R: calculate p-value given Chi Squared and Degrees of Freedom
|
Try,
pchisq(chi, df, lower.tail=FALSE)
in your example,
pchisq(15, 2, lower.tail=FALSE)
[1] 0.0005530844
(Note that pchisq(15, 2) on its own returns the left-tail probability, 0.9994469, which is not the p-value.)
|
13,955
|
Utility of the Frisch-Waugh theorem
|
Consider the fixed effects panel data model, also known as the Least Squares Dummy Variables (LSDV) model.
$b_{LSDV}$ can be computed by directly applying OLS to the model $$y=X\beta+D\alpha+\epsilon,$$
where $D$ is a $NT\times N$ matrix of dummies and $\alpha$ represent the individual-specific fixed effects.
Another way to compute $b_{LSDV}$ is to apply the so called within transformation to the usual model in order to obtain a demeaned version of it, i.e. $$M_{[D]}y=M_{[D]}X\beta+M_{[D]}\epsilon.$$
Here, $M_{[D]}=I-D(D'D)^{-1}D'$, the residual maker matrix of a regression on $D$.
By the Frisch-Waugh-Lovell theorem, the two are equivalent, as FWL says that you can compute a subset of regression coefficients of a regression (here, $\hat\beta$) by
regressing $y$ on the other regressors (here, $D$), saving the residuals (here, the time-demeaned $y$ or $M_{[D]}y$, because regression on a constant just demeans the variables), then
regressing the $X$ on $D$ and saving the residuals $M_{[D]}X$, and
regressing the residuals onto each other, $M_{[D]}y$ on $M_{[D]}X$.
The second version is much more widely used, because typical panel data sets may have thousands of panel units $N$, so that the first approach would require you to run a regression with thousands of regressors, which is not a good idea numerically even nowadays with fast computers, as computing the inverse of $(D :X)'(D: X)$ would be very expensive, whereas time-demeaning $y$ and $X$ is of little cost.
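The equivalence of the two approaches can be checked numerically. Below is a small numpy simulation (the data-generating process, sample sizes, and seed are invented for illustration): it fits the LSDV regression with explicit dummies and the within (demeaned) regression, and confirms that the two estimates of $\beta$ coincide, as FWL guarantees.

```python
import numpy as np

rng = np.random.default_rng(0)
N, T = 5, 10                      # small balanced panel: 5 units, 10 periods
n = N * T
ids = np.repeat(np.arange(N), T)  # panel-unit id for each observation

x = rng.normal(size=n)
alpha = rng.normal(size=N)        # individual-specific fixed effects
y = 2.0 * x + alpha[ids] + rng.normal(scale=0.1, size=n)

# (1) LSDV: regress y on x plus a full set of N unit dummies
D = np.eye(N)[ids]                # NT x N dummy matrix
Z = np.column_stack([x, D])
beta_lsdv = np.linalg.lstsq(Z, y, rcond=None)[0][0]

# (2) Within transformation: demean y and x within each unit, then OLS
def demean(v):
    means = np.bincount(ids, weights=v) / T   # per-unit means (balanced panel)
    return v - means[ids]

y_dm, x_dm = demean(y), demean(x)
beta_within = (x_dm @ y_dm) / (x_dm @ x_dm)   # simple OLS slope, no intercept

assert np.isclose(beta_lsdv, beta_within)     # FWL: identical estimates
print(beta_lsdv, beta_within)
```

Note that approach (2) never forms the $(N+1)$-column regressor matrix, which is exactly the computational saving described above when $N$ is in the thousands.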
|
13,956
|
Utility of the Frisch-Waugh theorem
|
Here is a simplified version of my first answer, which I believe is less practically relevant, but possibly easier to "sell" for classroom use.
The regressions $$y_i = \beta_1 + \sum_{j=2}^K\beta_jx_{ij} + \epsilon_i$$ and $$y_i-\bar{y} = \sum^K_{j=2}\beta_j(x_{ij} - \bar{x}_j) + \tilde{\epsilon}_i$$ yield identical $\widehat{\beta}_j$, $j=2,\ldots,K$.
This can be seen as follows: take $\mathbf{x}_1=\mathbf{1}:=(1,\ldots,1)'$ and hence
$$
M_\mathbf{1}=I-\mathbf{1}(\mathbf{1}'\mathbf{1})^{-1}\mathbf{1}'=I-\frac{\mathbf{1}\mathbf{1}'}{n},
$$
so that
$$M_{\mathbf{1}}\mathbf{x}_j=\mathbf{x}_j-\mathbf{1} n^{-1}\mathbf{1}'\mathbf{x}_j=\mathbf{x}_j-\mathbf{1}\bar{x}_j=:\mathbf{x}_j-\bar{\mathbf{x}}_j.
$$
Hence, the residuals of a regression of variables on a constant, $M_{\mathbf{1}}\mathbf{x}_j$, are just the demeaned variables (the same logic of course applies to $y_i$).
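A quick numerical check of this claim, as a Python/numpy sketch on simulated data (the coefficient values are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200
X = rng.normal(size=(n, 2))
y = 1.5 + X @ np.array([2.0, -1.0]) + rng.normal(size=n)   # made-up coefficients

# (1) Regression with an intercept column
Xc = np.column_stack([np.ones(n), X])
beta_with_const = np.linalg.lstsq(Xc, y, rcond=None)[0][1:]   # drop the intercept

# (2) Regression of demeaned y on demeaned X, no intercept
Xd = X - X.mean(axis=0)
yd = y - y.mean()
beta_demeaned = np.linalg.lstsq(Xd, yd, rcond=None)[0]

print(beta_with_const, beta_demeaned)   # identical up to floating-point error
```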
|
13,957
|
Utility of the Frisch-Waugh theorem
|
Here is another, more indirect, but I believe interesting one, namely the connection between different approaches to computing the partial autocorrelation coefficient of a stationary time series.
Definition 1
Consider the projection
\begin{equation}
\hat{Y}_{t}-\mu=\alpha^{(m)}_1(Y_{t-1}-\mu)+\alpha^{(m)}_2(Y_{t-2}-\mu)+\ldots+\alpha^{(m)}_m(Y_{t-m}-\mu)
\end{equation}
The $m$th partial autocorrelation equals $\alpha^{(m)}_m$.
It thus gives the influence of the $m$th lag on $Y_t$ after controlling for $Y_{t-1},\ldots,Y_{t-m+1}$. Contrast this with $\rho_m$, which gives the "raw" correlation of $Y_t$ and $Y_{t-m}$.
How do we find the $\alpha^{(m)}_j$? Recall that a fundamental property of a regression of $Z_t$ on regressors $X_t$ is that the coefficients are such that regressors and residuals are uncorrelated. In a population regression this condition is then stated in terms of population correlations. Then:
\begin{equation}
E[X_t(Z_t-X_t^\top\mathbf{\alpha}^{(m)})]=0
\end{equation}
Solving for $\mathbf{\alpha}^{(m)}$ we find the linear projection coefficients
\begin{equation}
\mathbf{\alpha}^{(m)}=[E(X_tX_t^\top)]^{-1}E[X_tZ_t]
\end{equation}
Applying this formula to $Z_t=Y_t-\mu$ and $$X_t=[(Y_{t-1}-\mu),(Y_{t-2}-\mu),\ldots,(Y_{t-m}-\mu)]^\top$$ we have
$$
E(X_tX_t^\top)=\left(\begin{array}{cccc}
\gamma_{0} & \gamma_{1}&\cdots& \gamma_{m-1}\\
\gamma_{1}& \gamma_{0} & \cdots &\gamma_{m-2}\\
\vdots & \vdots & \ddots &\vdots\\
\gamma_{m-1}&\gamma_{m-2} & \cdots & \gamma_{0}\\
\end{array}
\right)
$$
Also,
$$
E(X_tZ_t)=\left(
\begin{array}{c}
\gamma_1 \\
\vdots \\
\gamma_m \\
\end{array}
\right)
$$
Hence,
\begin{equation}
\mathbf{\alpha}^{(m)}=\left(\begin{array}{cccc}
\gamma_{0} & \gamma_{1}&\cdots& \gamma_{m-1}\\
\gamma_{1}& \gamma_{0} & \cdots &\gamma_{m-2}\\
\vdots & \vdots & \ddots &\vdots\\
\gamma_{m-1}&\gamma_{m-2} & \cdots & \gamma_{0}\\
\end{array}
\right)^{-1}\left(
\begin{array}{c}
\gamma_1 \\
\vdots \\
\gamma_m \\
\end{array}
\right)\end{equation}
The $m$th partial correlation then is the last element of the vector $\mathbf{\alpha}^{(m)}$.
So, we sort of run a multiple regression and find one coefficient of interest while controlling for the others.
Definition 2
The $m$th partial correlation is the correlation of the prediction error of $Y_{t}$ predicted with $Y_{t-1},\ldots,Y_{t-m+1}$ with the prediction error of $Y_{t-m}$ predicted with $Y_{t-1},\ldots,Y_{t-m+1}$.
So, we sort of first control for the intermediate lags and then compute the correlation of the residuals.
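Both definitions can be checked numerically. The Python/numpy sketch below (simulated AR(1) data with an assumed $\varphi=0.6$) computes Definition 1 by solving the autocovariance system, and Definition 2 at $m=2$ by correlating the two prediction-error series:

```python
import numpy as np

rng = np.random.default_rng(2)
phi, n = 0.6, 50_000                    # an AR(1) with a made-up phi
eps = rng.normal(size=n)
y = np.empty(n)
y[0] = eps[0]
for t in range(1, n):
    y[t] = phi * y[t - 1] + eps[t]

def autocov(y, k):
    yc = y - y.mean()
    return float(np.mean(yc[k:] * yc[:len(y) - k]))

def pacf_def1(y, m):
    """Definition 1: last element of the solution of the autocovariance system."""
    gamma = [autocov(y, k) for k in range(m + 1)]
    G = np.array([[gamma[abs(i - j)] for j in range(m)] for i in range(m)])
    rhs = np.array(gamma[1:m + 1])
    return float(np.linalg.solve(G, rhs)[-1])

print(pacf_def1(y, 1))   # approx phi = 0.6
print(pacf_def1(y, 2))   # approx 0: an AR(1) has no partial correlation past lag 1

# Definition 2 at m = 2: correlate the residuals of y_t and y_{t-2},
# each predicted from the intermediate lag y_{t-1}
yc = y - y.mean()
y0, y1, y2 = yc[2:], yc[1:-1], yc[:-2]
r_fwd = y0 - (y0 @ y1 / (y1 @ y1)) * y1
r_bwd = y2 - (y2 @ y1 / (y1 @ y1)) * y1
print(np.corrcoef(r_fwd, r_bwd)[0, 1])   # close to pacf_def1(y, 2)
```

For an AR(1), the lag-1 partial autocorrelation is close to $\varphi$ and the lag-2 one is close to zero, and the two definitions agree up to sampling noise, which is exactly the FWL connection described above.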
|
13,958
|
Why scaling is important for the linear SVM classification?
|
SVM tries to maximize the distance between the separating plane and the support vectors. If one feature (i.e. one dimension in this space) has very large values, it will dominate the other features when calculating the distance. If you rescale all features (e.g. to [0, 1]), they all have the same influence on the distance metric.
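A minimal Python sketch of this effect (the feature values and ranges are made up):

```python
import math

# Two observations on raw scales (made-up values): a small-range feature
# and a large-range feature
a = (1.2, 30_000.0)
b = (4.5, 31_000.0)

def dist(p, q):
    return math.sqrt(sum((pi - qi) ** 2 for pi, qi in zip(p, q)))

print(dist(a, b))  # ~1000: the distance is dominated by the large-scale feature

# Min-max rescale both features to [0, 1] using assumed observed ranges
r1, r2 = (0.5, 5.0), (25_000.0, 60_000.0)

def scale(p):
    return ((p[0] - r1[0]) / (r1[1] - r1[0]),
            (p[1] - r2[0]) / (r2[1] - r2[0]))

print(dist(scale(a), scale(b)))  # now both features contribute comparably
```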
|
13,959
|
Why scaling is important for the linear SVM classification?
|
I think it can be made clearer through an example. Let's say you have two input vectors, X1 and X2, where X1 has range (0.1 to 0.8) and X2 has range (3000 to 50000). Your SVM classifier will then be a linear boundary lying in the X1-X2 plane. My claim is that the slope of the linear decision boundary should not depend on the range of X1 and X2, but instead on the distribution of points.
Now let's make a prediction at the points (0.1, 4000) and (0.8, 4000). There will be hardly any difference in the value of the decision function, making the SVM less accurate since it has little sensitivity to points in the X1 direction.
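In code, a small Python sketch using the ranges from the example:

```python
# The two points from the example, on raw scales
p, q = (0.1, 4000.0), (0.8, 4000.0)

# On raw scales X1 differs by only 0.7 while X2 spans ~47,000 units, so a
# decision function fitted on raw data is nearly insensitive to X1
raw_gap = abs(p[0] - q[0])  # about 0.7

# Rescale with the ranges stated above: X1 in [0.1, 0.8], X2 in [3000, 50000]
def scale(x):
    return ((x[0] - 0.1) / 0.7, (x[1] - 3000.0) / 47000.0)

sp, sq = scale(p), scale(q)
print(sp, sq)  # the X1 gap becomes the full [0, 1] range; X2 coordinates coincide
```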
|
13,960
|
How do I interpret my regression with first differenced variables?
|
Suppose that we have the model
$$\begin{equation*} y_t = \beta_0 + \beta_1 x_t + \beta_2 t + \epsilon_t. \end{equation*}$$
You say that these coefficients are easier to interpret. Let's subtract $y_{t-1}$ from the lefthand side and $\beta_0 + \beta_1 x_{t-1} + \beta_2 ({t-1}) + \epsilon_{t-1}$, which equals $y_{t-1}$, from the righthand side. We have
$$\begin{equation*} \Delta y_t = \beta_1 \Delta x_t + \beta_2 + \Delta \epsilon_t. \end{equation*}$$
The intercept in the difference equation is the time trend. And the coefficient on $\Delta x$ has the same interpretation as $\beta_1$ in the original model.
If the errors were non-stationary such that
$$\begin{equation*} \epsilon_t = \sum_{s=0}^{t-1}{\nu_s}, \end{equation*}$$
where $\nu_s$ is white noise, then the differenced error is white noise.
If the errors have a stationary AR(p) distribution, say, then the differenced error term would have a more complicated distribution and, notably, would retain serial correlation. Or if the original $\epsilon$ are already white noise (an AR(1) with a correlation coefficient of 0, if you like), then differencing induces serial correlation between the errors (specifically, an MA(1) structure).
For these reasons, it is important to only difference processes that are non-stationary due to unit roots and use detrending for so-called trend stationary ones.
(A unit root causes the variance of a series to change and in fact to explode over time; the expected value of the series is constant, however. A trend stationary process has the opposite properties.)
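The algebra above can be verified with a short simulation (a Python/numpy sketch; the coefficient values are made up). The intercept of the differenced regression recovers the time trend $\beta_2$ and the slope recovers $\beta_1$:

```python
import numpy as np

rng = np.random.default_rng(3)
T = 500
b0, b1, b2 = 1.0, 0.5, 0.2            # made-up true coefficients
t = np.arange(T)
x = np.cumsum(rng.normal(size=T))      # a trending (random-walk) regressor
y = b0 + b1 * x + b2 * t + 0.1 * rng.normal(size=T)

# First differences: dy_t = b1 * dx_t + b2 + de_t, so the intercept of the
# differenced regression estimates the time trend b2
dy, dx = np.diff(y), np.diff(x)
X = np.column_stack([np.ones(T - 1), dx])
intercept, slope = np.linalg.lstsq(X, dy, rcond=None)[0]
print(intercept, slope)                # approximately (0.2, 0.5)
```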
|
13,961
|
How do I interpret my regression with first differenced variables?
|
First differencing removes linear trends that seem to persist in your original residuals. It looks like the first differencing removed the trend in the residuals and you are left with basically uncorrelated residuals. I am thinking that maybe the trend in the residuals hid part of the negative relationship between ERP and risk free rate and that would be the reason why the model shows a stronger relationship after differencing.
|
13,962
|
Determining best fitting curve fitting function out of linear, exponential, and logarithmic functions
|
You might want to check out the free software called Eureqa. It has the specific aim of automating the process of finding both the functional form and the parameters of a given functional relationship.
If you are comparing models, with different numbers of parameters, you will generally want to use a measure of fit that penalises models with more parameters. There is a rich literature on which fit measure is most appropriate for model comparison, and issues get more complicated when the models are not nested. I'd be interested to hear what others think is the most suitable model comparison index given your scenario (as a side point, there was recently a discussion on my blog about model comparison indices in the context of comparing models for curve fitting).
From my experience, non-linear regression models are used for reasons beyond pure statistical fit to the given data:
Non-linear models make more plausible predictions outside the range of the data
Non-linear models require fewer parameters for equivalent fit
Non-linear regression models are often applied in domains where there is substantial prior research and theory guiding model selection.
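As a sketch of what "penalising models with more parameters" looks like in practice, here is a hand-rolled Gaussian AIC in Python/numpy (one common form, up to an additive constant; the simulated data are made up):

```python
import numpy as np

rng = np.random.default_rng(4)
n = 100
x = rng.uniform(-2, 2, size=n)
y = 1.0 + 2.0 * x + rng.normal(size=n)      # truly linear data (made up)

def aic(y, yhat, k):
    """Gaussian AIC up to an additive constant: n*log(SSE/n) + 2k."""
    sse = float(np.sum((y - yhat) ** 2))
    return n * np.log(sse / n) + 2 * k

for deg in (1, 2, 5):
    coefs = np.polyfit(x, y, deg)
    print(deg, aic(y, np.polyval(coefs, x), deg + 1))
# Higher-degree fits always lower the SSE, but each extra parameter must buy
# enough of a reduction to offset its penalty of 2 AIC points.
```

Each extra parameter must earn its keep; on truly linear data the higher-degree fits usually fail to do so.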
|
13,963
|
Determining best fitting curve fitting function out of linear, exponential, and logarithmic functions
|
This is a question that is valid in very diverse domains.
The best model is the one that can predict data points that were not used during the parameter estimation. Ideally one would compute model parameters with a subset of the data set, and evaluate the fit performance on another data set. If you are interested in the details make a search with "cross-validation".
So the answer to the first question is "No". You cannot simply take the best-fitting model. Imagine you are fitting a polynomial of degree N-1 to N data points. This will be a perfect fit, because the model will pass exactly through every data point. However, this model will not generalize to new data.
When you do not have enough data to go through the cross-validation procedure in a sound manner, you can use metrics such as AIC or BIC. These metrics simultaneously penalize the size of the residuals and the number of parameters in your model, but they make strong assumptions about the generative process of your data. As these metrics penalize over-fitting, they can be used as a proxy for model selection.
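The perfect-fit-that-does-not-generalize point is easy to demonstrate (a Python/numpy sketch; the data are simulated from a noisy line):

```python
import numpy as np

rng = np.random.default_rng(5)
n = 6
x = np.linspace(0, 1, n)
y = 2.0 * x + 0.3 * rng.normal(size=n)        # a noisy line (made up)

c_full = np.polyfit(x, y, n - 1)              # degree 5: interpolates all 6 points
c_lin = np.polyfit(x, y, 1)
train_mse_full = float(np.mean((y - np.polyval(c_full, x)) ** 2))
train_mse_lin = float(np.mean((y - np.polyval(c_lin, x)) ** 2))
print(train_mse_full, train_mse_lin)          # ~0 vs. roughly the noise variance

# On fresh data from the same process the perfect training fit is no advantage
x_new = rng.uniform(0, 1, size=200)
y_new = 2.0 * x_new + 0.3 * rng.normal(size=200)
test_mse_full = float(np.mean((y_new - np.polyval(c_full, x_new)) ** 2))
test_mse_lin = float(np.mean((y_new - np.polyval(c_lin, x_new)) ** 2))
print(test_mse_full, test_mse_lin)
```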
|
13,964
|
Determining best fitting curve fitting function out of linear, exponential, and logarithmic functions
|
Since plenty of people routinely explore the fit of various curves to their data, I don't know where your reservations are coming from. Granted, there is the fact that a quadratic will always fit at least as well as a linear, and a cubic, at least as well as a quadratic, so there are ways to test the statistical significance of adding such a nonlinear term and thus to avoid needless complexity. But the basic practice of testing many different forms of a relationship is just good practice. In fact, one might start with a very flexible loess regression to see what is the most plausible kind of curve to fit.
|
13,965
|
Determining best fitting curve fitting function out of linear, exponential, and logarithmic functions
|
You really need to find a balance between the science/theory that leads to the data and what the data tells you. Like others have said, if you let yourself fit any possible transformation (polynomials of any degree, etc.) then you will end up overfitting and getting something that is useless.
One way to convince yourself of this is through simulation. Choose one of the models (linear, exponential, log) and generate data that follows this model (with a choice of the parameters). If the conditional variance of the y values is small relative to the spread of the x variable, then a simple plot will make it obvious which model was chosen and what the "truth" is. But if you choose a set of parameters such that it is not obvious from the plots (probably the case where an analytic solution is of interest), then analyze the data with each of the 3 models and see which gives the "best" fit. I expect that you will find that the "best" fit is often not the "true" fit.
On the other hand, sometimes we want the data to tell us as much as possible, and we may not have the science/theory to fully determine the nature of the relationship. The original paper by Box and Cox (JRSS B, vol. 26, no. 2, 1964) discusses ways to compare several transformations of the y variable. Their family of transformations has linear and log as special cases (but not exponential), yet nothing in the theory of the paper limits you to that family; the same methodology could be extended to compare the three models that you are interested in.
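Here is a sketch of the simulation exercise in Python/numpy (the true model is taken to be exponential, with made-up parameters; each candidate is fitted by least squares after the appropriate transformation):

```python
import numpy as np

rng = np.random.default_rng(6)
x = np.linspace(1, 10, 60)
y = np.exp(0.4 * x) + 0.2 * rng.normal(size=60)   # truly exponential (made up)

def sse(yhat):
    return float(np.sum((y - yhat) ** 2))

a, b = np.polyfit(x, y, 1)[::-1]                  # linear:      y ~ a + b*x
al, bl = np.polyfit(np.log(x), y, 1)[::-1]        # logarithmic: y ~ a + b*ln(x)
ae, be = np.polyfit(x, np.log(y), 1)[::-1]        # exponential: ln(y) ~ a + b*x

print(sse(a + b * x), sse(al + bl * np.log(x)), sse(np.exp(ae + be * x)))
```

With a clearly curved truth, the exponential fit wins on SSE; with weaker signal the ranking becomes much less reliable, which is the point of the exercise.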
|
13,966
|
Can you say that statistics and probability is like induction and deduction?
|
I think it is the best to quickly recap the meaning of inductive and deductive reasoning before answering your question.
Deductive Reasoning: "Deductive arguments are attempts to show that a conclusion necessarily follows from a set of premises. A deductive argument is valid if the conclusion does follow necessarily from the premises, i.e., if the conclusion must be true provided that the premises are true. A deductive argument is sound if it is valid and its premises are true. Deductive arguments are valid or invalid, sound or unsound, but are never false or true." (quoted from wikipedia, emphasis added).
"Inductive reasoning, also known as induction or inductive logic, or educated guess in colloquial English, is a kind of reasoning that allows for the possibility that the conclusion is false even where all of the premises are true. The premises of an inductive logical argument indicate some degree of support (inductive probability) for the conclusion but do not entail it; that is, they do not ensure its truth." (from wikipedia, emphasis added)
To stress the main difference: whereas deductive reasoning transfers truth from premises to conclusions, inductive reasoning does not. That is, whereas with deductive reasoning you never broaden your knowledge (i.e., everything is in the premises, but sometimes hidden and needs to be demonstrated via proofs), inductive reasoning allows you to broaden your knowledge (i.e., you may gain new insights that are not already contained in the premises, at the cost of not knowing whether they are true).
How does this relate to probability and statistics?
In my eyes, probability is necessarily deductive. It is a branch of math. So based on some axioms or ideas (supposedly true ones) it deduces theories.
However, statistics is not necessarily inductive. It is inductive only if you try to use it for generating knowledge about unobserved entities (i.e., pursuing inferential statistics; see also onestop's answer). However, if you use statistics to describe the sample (i.e., descriptive statistics) or if you sampled the whole population, it is still deductive, as you do not get any more knowledge or information than is already present in the sample.
So, if you think about statistics as being the heroic endeavor of scientists trying to use mathematical methods to find regularities that govern the interplay of the empirical entities in the world, which is in fact never successful (i.e., we will never really know if any of our theories is true), then, yeah, this is induction. It's also the Scientific Method as articulated by Francis Bacon, upon which modern empirical science is founded. The method leads to inductive conclusions which are at best highly probable, though not certain. This in turn leads to misunderstanding among non-scientists about the meaning of a scientific theory and a scientific proof.
Update: After reading Conjugate Prior's answer (and after some overnight thinking) I would like to add something. I think the question of whether (inferential) statistical reasoning is deductive or inductive depends on what exactly it is that you are interested in, i.e., what kind of conclusion you are striving for.
If you are interested in probabilistic conclusions, then statistical reasoning is deductive. This means, if you want to know whether, e.g., in 95 out of 100 cases the population value is within a certain interval (i.e., a confidence interval), then you can get a truth value (true or not true) for this statement. You can say (if the assumptions are true) that it is the case that in 95 out of 100 cases the population value is within the interval. However, in no empirical case will you know if the population value is in your obtained CI. Either it is or it is not, but there is no way to be sure. The same reasoning applies for probabilities in classical p-value and Bayesian statistics. You can be sure about probabilities.
However, if you are interested in conclusions about empirical entities (e.g., where the population value is), you can only argue inductively. You can use all available statistical methods to accumulate evidence that supports certain propositions about empirical entities or the causal mechanisms with which they interact. But you will never be certain about any of these propositions.
To recap: The point I want to make is that it matters what you are looking at. Probabilities you can deduce, but for any definite proposition about things you can only find evidence in its favor. Nothing more. See also onestop's link to the problem of induction.
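The deductive claim about coverage can be illustrated with a small simulation: under an assumed model, roughly 95 of 100 such intervals contain the true value, while any single interval either does or does not. A minimal sketch (all numbers here are made up for illustration, assuming a normal model with known variance so the z-interval applies):

```python
import numpy as np

rng = np.random.default_rng(0)
true_mean, sigma, n, trials = 10.0, 2.0, 50, 2000
z = 1.96                               # two-sided 95% normal quantile
half_width = z * sigma / np.sqrt(n)    # known-sigma z-interval

covered = 0
for _ in range(trials):
    xbar = rng.normal(true_mean, sigma, n).mean()
    # Each interval either contains true_mean or it does not;
    # only the long-run frequency is deduced to be ~0.95.
    covered += (xbar - half_width) <= true_mean <= (xbar + half_width)

coverage = covered / trials
```

The long-run `coverage` lands near 0.95, but inspecting any single interval tells you nothing certain about whether it is one of the lucky 95.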
|
Can you say that statistics and probability is like induction and deduction?
|
I think it is best to quickly recap the meaning of inductive and deductive reasoning before answering your question.
Deductive Reasoning: "Deductive arguments are attempts to show that a conclusi
|
Can you say that statistics and probability is like induction and deduction?
I think it is best to quickly recap the meaning of inductive and deductive reasoning before answering your question.
Deductive Reasoning: "Deductive arguments are attempts to show that a conclusion necessarily follows from a set of premises. A deductive argument is valid if the conclusion does follow necessarily from the premises, i.e., if the conclusion must be true provided that the premises are true. A deductive argument is sound if it is valid and its premises are true. Deductive arguments are valid or invalid, sound or unsound, but are never false or true." (quoted from wikipedia, emphasis added).
"Inductive reasoning, also known as induction or inductive logic, or educated guess in colloquial English, is a kind of reasoning that allows for the possibility that the conclusion is false even where all of the premises are true. The premises of an inductive logical argument indicate some degree of support (inductive probability) for the conclusion but do not entail it; that is, they do not ensure its truth." (from wikipedia, emphasis added)
To stress the main difference: Whereas deductive reasoning transfers the truth from premises to conclusions, inductive reasoning does not. That is, whereas for deductive reasoning you never broaden your knowledge (i.e., everything is in the premises, but sometimes hidden and needs to be demonstrated via proofs), inductive reasoning allows you to broaden your knowledge (i.e., you may gain new insights that are not already contained in the premises, though at the cost of not knowing their truth).
How does this relate to probability and statistics?
In my eyes, probability is necessarily deductive. It is a branch of math. So based on some axioms or ideas (supposedly true ones) it deduces theories.
However, statistics is not necessarily inductive. It is inductive only if you try to use it for generating knowledge about unobserved entities (i.e., pursuing inferential statistics; see also onestop's answer). However, if you use statistics to describe the sample (i.e., descriptive statistics) or if you sampled the whole population, it is still deductive, as you do not get any more knowledge or information than is already present in the sample.
So, if you think about statistics as being the heroic endeavor of scientists trying to use mathematical methods to find regularities that govern the interplay of the empirical entities in the world, which is in fact never successful (i.e., we will never really know if any of our theories is true), then, yeah, this is induction. It's also the Scientific Method as articulated by Francis Bacon, upon which modern empirical science is founded. The method leads to inductive conclusions which are at best highly probable, though not certain. This in turn leads to misunderstanding among non-scientists about the meaning of a scientific theory and a scientific proof.
Update: After reading Conjugate Prior's answer (and after some overnight thinking) I would like to add something. I think the question of whether (inferential) statistical reasoning is deductive or inductive depends on what exactly it is that you are interested in, i.e., what kind of conclusion you are striving for.
If you are interested in probabilistic conclusions, then statistical reasoning is deductive. This means, if you want to know whether, e.g., in 95 out of 100 cases the population value is within a certain interval (i.e., a confidence interval), then you can get a truth value (true or not true) for this statement. You can say (if the assumptions are true) that it is the case that in 95 out of 100 cases the population value is within the interval. However, in no empirical case will you know if the population value is in your obtained CI. Either it is or it is not, but there is no way to be sure. The same reasoning applies for probabilities in classical p-value and Bayesian statistics. You can be sure about probabilities.
However, if you are interested in conclusions about empirical entities (e.g., where the population value is), you can only argue inductively. You can use all available statistical methods to accumulate evidence that supports certain propositions about empirical entities or the causal mechanisms with which they interact. But you will never be certain about any of these propositions.
To recap: The point I want to make is that it matters what you are looking at. Probabilities you can deduce, but for any definite proposition about things you can only find evidence in its favor. Nothing more. See also onestop's link to the problem of induction.
|
Can you say that statistics and probability is like induction and deduction?
I think it is best to quickly recap the meaning of inductive and deductive reasoning before answering your question.
Deductive Reasoning: "Deductive arguments are attempts to show that a conclusi
|
13,967
|
Can you say that statistics and probability is like induction and deduction?
|
Statistics is the deductive approach to induction. Consider the two main approaches to statistical inference: Frequentist and Bayesian.
Assume you are a Frequentist (in the style of Fisher, rather than Neyman, for convenience). You wonder whether a parameter of substantive interest takes a particular value, so you construct a model, choose a statistic relating to the parameter, and perform a test. The p-value generated by your test indicates the probability of seeing a statistic as or more extreme than the statistic computed from the sample you have, assuming that your model is correct. You get a small enough p-value, so you reject the hypothesis that the parameter does take that value. Your reasoning is deductive: Assuming the model is correct, either the parameter really does take the value of substantive interest but yours is an unlikely sample to see, or it does not in fact take that value.
Turning from hypothesis tests to confidence intervals: you have a 95% confidence interval for your parameter which does not contain the value of substantive interest. Your reasoning is again deductive: assuming the model is correct, either this is one of those rare intervals that will appear 1 in 20 times when the parameter really does have the value of substantive interest (because your sample is an unlikely one), or the parameter does not in fact have that value.
Now assume you are a Bayesian (in the style of Laplace rather than Gelman). Your model assumptions and calculations give you a (posterior) probability distribution over the parameter value. Most of the mass of this distribution is far from the value of substantive interest, so you conclude that the parameter probably does not have this value. Your reasoning is again deductive: assuming your model to be correct and if the prior distribution represented your beliefs about the parameter, then your beliefs about it in the light of the data are described by your posterior distribution which puts very little probability on that value. Since this distribution offers little support for the value of substantive interest, you might conclude that the parameter does not in fact have the value. (Or you might be content to state the probability it does).
In all three cases you get a logical disjunction to base your action on which is derived deductively/mathematically from assumptions. These assumptions are usually about a model of how the data is generated, but may also be prior beliefs about other quantities.
|
Can you say that statistics and probability is like induction and deduction?
|
Statistics is the deductive approach to induction. Consider the two main approaches to statistical inference: Frequentist and Bayesian.
Assume you are a Frequentist (in the style of Fisher, rather
|
Can you say that statistics and probability is like induction and deduction?
Statistics is the deductive approach to induction. Consider the two main approaches to statistical inference: Frequentist and Bayesian.
Assume you are a Frequentist (in the style of Fisher, rather than Neyman, for convenience). You wonder whether a parameter of substantive interest takes a particular value, so you construct a model, choose a statistic relating to the parameter, and perform a test. The p-value generated by your test indicates the probability of seeing a statistic as or more extreme than the statistic computed from the sample you have, assuming that your model is correct. You get a small enough p-value, so you reject the hypothesis that the parameter does take that value. Your reasoning is deductive: Assuming the model is correct, either the parameter really does take the value of substantive interest but yours is an unlikely sample to see, or it does not in fact take that value.
Turning from hypothesis tests to confidence intervals: you have a 95% confidence interval for your parameter which does not contain the value of substantive interest. Your reasoning is again deductive: assuming the model is correct, either this is one of those rare intervals that will appear 1 in 20 times when the parameter really does have the value of substantive interest (because your sample is an unlikely one), or the parameter does not in fact have that value.
Now assume you are a Bayesian (in the style of Laplace rather than Gelman). Your model assumptions and calculations give you a (posterior) probability distribution over the parameter value. Most of the mass of this distribution is far from the value of substantive interest, so you conclude that the parameter probably does not have this value. Your reasoning is again deductive: assuming your model to be correct and if the prior distribution represented your beliefs about the parameter, then your beliefs about it in the light of the data are described by your posterior distribution which puts very little probability on that value. Since this distribution offers little support for the value of substantive interest, you might conclude that the parameter does not in fact have the value. (Or you might be content to state the probability it does).
In all three cases you get a logical disjunction to base your action on which is derived deductively/mathematically from assumptions. These assumptions are usually about a model of how the data is generated, but may also be prior beliefs about other quantities.
|
Can you say that statistics and probability is like induction and deduction?
Statistics is the deductive approach to induction. Consider the two main approaches to statistical inference: Frequentist and Bayesian.
Assume you are a Frequentist (in the style of Fisher, rather
|
13,968
|
Can you say that statistics and probability is like induction and deduction?
|
Yes! Maybe statistics isn't strictly equal to induction, but statistics is the solution to the problem of induction in my opinion.
|
Can you say that statistics and probability is like induction and deduction?
|
Yes! Maybe statistics isn't strictly equal to induction, but statistics is the solution to the problem of induction in my opinion.
|
Can you say that statistics and probability is like induction and deduction?
Yes! Maybe statistics isn't strictly equal to induction, but statistics is the solution to the problem of induction in my opinion.
|
Can you say that statistics and probability is like induction and deduction?
Yes! Maybe statistics isn't strictly equal to induction, but statistics is the solution to the problem of induction in my opinion.
|
13,969
|
Can you say that statistics and probability is like induction and deduction?
|
Induction: for a statistical problem, the sample along with inferential statistics allows us to draw conclusions about the population, with inferential statistics making clear use of elements of probability.
Deduction: elements in probability allow us to draw conclusions about the characteristics of hypothetical data taken from the population, based on known features of the population.
Reference: Walpole RE, Myers RH, Myers SL, Ye K. Probability & statistics for engineers & scientists. Pearson Prentice Hall; 2011.
|
Can you say that statistics and probability is like induction and deduction?
|
Induction: for a statistical problem, the sample along with inferential statistics allows us to draw conclusions about the population, with inferential statistics making clear use of elements of proba
|
Can you say that statistics and probability is like induction and deduction?
Induction: for a statistical problem, the sample along with inferential statistics allows us to draw conclusions about the population, with inferential statistics making clear use of elements of probability.
Deduction: elements in probability allow us to draw conclusions about the characteristics of hypothetical data taken from the population, based on known features of the population.
Reference: Walpole RE, Myers RH, Myers SL, Ye K. Probability & statistics for engineers & scientists. Pearson Prentice Hall; 2011.
|
Can you say that statistics and probability is like induction and deduction?
Induction: for a statistical problem, the sample along with inferential statistics allows us to draw conclusions about the population, with inferential statistics making clear use of elements of proba
|
13,970
|
What are some good blogs for Mathematical Statistics and Machine Learning?
|
Francis Bach's Machine Learning Research blog is an "easy to digest" introduction to some of his research works and related topics ("easy" as in easier than reading the original papers).
It contains many excellent in-depth writings about kernel methods, optimization algorithms, linear algebra and highlights how these topics interact with each other as well as their applications in Machine Learning/Statistical Learning Theory.
|
What are some good blogs for Mathematical Statistics and Machine Learning?
|
Francis Bach's Machine Learning Research blog is an "easy to digest" introduction to some of his research works and related topics ("easy" as in easier than reading the original papers).
It contains m
|
What are some good blogs for Mathematical Statistics and Machine Learning?
Francis Bach's Machine Learning Research blog is an "easy to digest" introduction to some of his research works and related topics ("easy" as in easier than reading the original papers).
It contains many excellent in-depth writings about kernel methods, optimization algorithms, linear algebra and highlights how these topics interact with each other as well as their applications in Machine Learning/Statistical Learning Theory.
|
What are some good blogs for Mathematical Statistics and Machine Learning?
Francis Bach's Machine Learning Research blog is an "easy to digest" introduction to some of his research works and related topics ("easy" as in easier than reading the original papers).
It contains m
|
13,971
|
What are some good blogs for Mathematical Statistics and Machine Learning?
|
Andrew Gelman: https://statmodeling.stat.columbia.edu. Gelman is a professor of statistics and political science at Columbia, and has co-authored several statistics books, including Bayesian Data Analysis and Regression and Other Stories. I strongly disagree with most of his politics, but his statistics is generally sound.
|
What are some good blogs for Mathematical Statistics and Machine Learning?
|
Andrew Gelman: https://statmodeling.stat.columbia.edu. Gelman is a professor of statistics and political science at Columbia, and has co-authored several statistics books, including Bayesian Data Anal
|
What are some good blogs for Mathematical Statistics and Machine Learning?
Andrew Gelman: https://statmodeling.stat.columbia.edu. Gelman is a professor of statistics and political science at Columbia, and has co-authored several statistics books, including Bayesian Data Analysis and Regression and Other Stories. I strongly disagree with most of his politics, but his statistics is generally sound.
|
What are some good blogs for Mathematical Statistics and Machine Learning?
Andrew Gelman: https://statmodeling.stat.columbia.edu. Gelman is a professor of statistics and political science at Columbia, and has co-authored several statistics books, including Bayesian Data Anal
|
13,972
|
What are some good blogs for Mathematical Statistics and Machine Learning?
|
https://statisticaloddsandends.wordpress.com/ reminds me of Gunderson blog, nicely written with code and clear explanations.
|
What are some good blogs for Mathematical Statistics and Machine Learning?
|
https://statisticaloddsandends.wordpress.com/ reminds me of Gunderson blog, nicely written with code and clear explanations.
|
What are some good blogs for Mathematical Statistics and Machine Learning?
https://statisticaloddsandends.wordpress.com/ reminds me of Gunderson blog, nicely written with code and clear explanations.
|
What are some good blogs for Mathematical Statistics and Machine Learning?
https://statisticaloddsandends.wordpress.com/ reminds me of Gunderson blog, nicely written with code and clear explanations.
|
13,973
|
What are some good blogs for Mathematical Statistics and Machine Learning?
|
ICLR recently introduced its Blog Track, and it's taken inspiration from some blogs like Bach's. The best thing is that it's peer-reviewed and contains diverse topics from diverse authors (often a group of authors).
|
What are some good blogs for Mathematical Statistics and Machine Learning?
|
ICLR recently introduced its Blog Track, and it's taken inspiration from some blogs like Bach's. The best thing is that it's peer-reviewed and contains diverse topics from diverse authors (often a group of
|
What are some good blogs for Mathematical Statistics and Machine Learning?
ICLR recently introduced its Blog Track, and it's taken inspiration from some blogs like Bach's. The best thing is that it's peer-reviewed and contains diverse topics from diverse authors (often a group of authors).
|
What are some good blogs for Mathematical Statistics and Machine Learning?
ICLR recently introduced its Blog Track, and it's taken inspiration from some blogs like Bach's. The best thing is that it's peer-reviewed and contains diverse topics from diverse authors (often a group of
|
13,974
|
What are some good blogs for Mathematical Statistics and Machine Learning?
|
In the last couple of years I have warmed up to using geometry to understand deep learning models, and indeed various types of statistical models. While I recommend the book Geometric Deep Learning: Grids, Groups, Graphs, Geodesics, and Gauges, you can also find a list of blogs related to the topic.
|
What are some good blogs for Mathematical Statistics and Machine Learning?
|
In the last couple of years I have warmed up to using geometry to understand deep learning models, and indeed various types of statistical models. While I recommend the book Geometric Deep Learning: G
|
What are some good blogs for Mathematical Statistics and Machine Learning?
In the last couple of years I have warmed up to using geometry to understand deep learning models, and indeed various types of statistical models. While I recommend the book Geometric Deep Learning: Grids, Groups, Graphs, Geodesics, and Gauges, you can also find a list of blogs related to the topic.
|
What are some good blogs for Mathematical Statistics and Machine Learning?
In the last couple of years I have warmed up to using geometry to understand deep learning models, and indeed various types of statistical models. While I recommend the book Geometric Deep Learning: G
|
13,975
|
What are some good blogs for Mathematical Statistics and Machine Learning?
|
An Outsider's Tour of Reinforcement Learning by Ben Recht gives a short introduction to RL and draws connections to control theory.
|
What are some good blogs for Mathematical Statistics and Machine Learning?
|
An Outsider's Tour of Reinforcement Learning by Ben Recht gives a short introduction to RL and draws connections to control theory.
|
What are some good blogs for Mathematical Statistics and Machine Learning?
An Outsider's Tour of Reinforcement Learning by Ben Recht gives a short introduction to RL and draws connections to control theory.
|
What are some good blogs for Mathematical Statistics and Machine Learning?
An Outsider's Tour of Reinforcement Learning by Ben Recht gives a short introduction to RL and draws connections to control theory.
|
13,976
|
What are some good blogs for Mathematical Statistics and Machine Learning?
|
This is neither really a blog nor just about statistics, and it is often very basic, but I found much good advice and many good ideas in there, so I decided to add it as an answer:
https://chrisalbon.com/#code_statistics
|
What are some good blogs for Mathematical Statistics and Machine Learning?
|
This is neither really a blog nor just about statistics, and it is often very basic, but I found much good advice and many good ideas in there, so I decided to add it as an answer:
https://chrisalbon.com/#code_sta
|
What are some good blogs for Mathematical Statistics and Machine Learning?
This is neither really a blog nor just about statistics, and it is often very basic, but I found much good advice and many good ideas in there, so I decided to add it as an answer:
https://chrisalbon.com/#code_statistics
|
What are some good blogs for Mathematical Statistics and Machine Learning?
This is neither really a blog nor just about statistics, and it is often very basic, but I found much good advice and many good ideas in there, so I decided to add it as an answer:
https://chrisalbon.com/#code_sta
|
13,977
|
What are some good blogs for Mathematical Statistics and Machine Learning?
|
Towards Data Science is a collection of articles focussing on data science, machine learning, artificial intelligence and programming. It is written by various authors. The articles often focus on explaining some technique or area.
A quick search finds some links on the website here but possibly there are more indirect links.
|
What are some good blogs for Mathematical Statistics and Machine Learning?
|
Towards Data Science is a collection of articles focussing on data science, machine learning, artificial intelligence and programming. It is written by various authors. The articles often focus on explai
|
What are some good blogs for Mathematical Statistics and Machine Learning?
Towards Data Science is a collection of articles focussing on data science, machine learning, artificial intelligence and programming. It is written by various authors. The articles often focus on explaining some technique or area.
A quick search finds some links on the website here but possibly there are more indirect links.
|
What are some good blogs for Mathematical Statistics and Machine Learning?
Towards Data Science is a collection of articles focussing on data science, machine learning, artificial intelligence and programming. It is written by various authors. The articles often focus on explai
|
13,978
|
How do sample weights work in classification models?
|
As Frans Rodenburg already correctly stated in his comment, in most cases instance or sample weights factor into the loss function that is being optimized by the method in question.
Consider the equation the documentation provides for the primal problem of the C-SVM
$$\min_{w,b,\zeta} \frac{1}{2}w^Tw + C\sum_{i=1}^{n} \zeta_i. $$
Here $C$ is the same for each training sample, assigning equal 'cost' to each instance. In the case that there are sample weights passed to the fitting function
"The sample weighting rescales the C parameter, which means that the
classifier puts more emphasis on getting these points right."
As this example puts it; the example also provides a nice visualization, showing how instances represented by bigger circles (those with larger weight) influence the decision boundary.
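To make the loss-function view concrete, here is a minimal sketch (plain NumPy with toy data, not the actual libsvm internals) of a weighted hinge loss in which each slack term $\zeta_i$ is scaled by its sample weight, which is equivalent to rescaling $C$ per instance:

```python
import numpy as np

def weighted_hinge_loss(w, b, X, y, C, sample_weight):
    # y in {-1, +1}. Each slack term zeta_i = max(0, 1 - y_i (w.x_i + b))
    # enters the sum scaled by its sample weight, which has the same effect
    # as rescaling C for that instance.
    slack = np.maximum(0.0, 1.0 - y * (X @ w + b))
    return 0.5 * w @ w + C * np.sum(sample_weight * slack)

rng = np.random.default_rng(0)
X = rng.normal(size=(10, 2))
y = np.where(X[:, 0] > 0, 1, -1)
w, b = np.array([0.1, 0.0]), 0.0     # a deliberately weak boundary

uniform = np.ones(10)
boosted = uniform.copy()
boosted[0] = 10.0                    # "put more emphasis" on instance 0

loss_uniform = weighted_hinge_loss(w, b, X, y, C=1.0, sample_weight=uniform)
loss_boosted = weighted_hinge_loss(w, b, X, y, C=1.0, sample_weight=boosted)
# loss_boosted > loss_uniform: the optimizer now has a stronger incentive
# to move the boundary so that instance 0 is classified with margin.
```

The variable names and data here are illustrative only; the point is simply that an upweighted instance contributes a larger penalty while misclassified or inside the margin.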
|
How do sample weights work in classification models?
|
As Frans Rodenburg already correctly stated in his comment, in most cases instance or sample weights factor into the loss function that is being optimized by the method in question.
Consider the equa
|
How do sample weights work in classification models?
As Frans Rodenburg already correctly stated in his comment, in most cases instance or sample weights factor into the loss function that is being optimized by the method in question.
Consider the equation the documentation provides for the primal problem of the C-SVM
$$\min_{w,b,\zeta} \frac{1}{2}w^Tw + C\sum_{i=1}^{n} \zeta_i. $$
Here $C$ is the same for each training sample, assigning equal 'cost' to each instance. In the case that there are sample weights passed to the fitting function
"The sample weighting rescales the C parameter, which means that the
classifier puts more emphasis on getting these points right."
As this example puts it; the example also provides a nice visualization, showing how instances represented by bigger circles (those with larger weight) influence the decision boundary.
|
How do sample weights work in classification models?
As Frans Rodenburg already correctly stated in his comment, in most cases instance or sample weights factor into the loss function that is being optimized by the method in question.
Consider the equa
|
13,979
|
How do sample weights work in classification models?
|
Rickyfox's answer is great at explaining how the weights influence the results of a classifier, but maybe you could also be interested in why/how we would need such weights in the first place (which is more a statistical problem than a purely ML one).
Sometimes the data is observed under different distributions and we need to use sampling weights to account for this. You can look at Solon et al. (2015) for more details on why sampling weights matter for analyses and ML (it uses mostly algorithms from the econometrics literature, but the logic stays the same).
The idea is that these differences in distributions create imbalances in classes and features. If untreated, this can affect the performance of the predictors/classifiers. I recently wrote a blog post about how you can use these weights to improve the accuracy of some algorithms (it features an example with soccer data): https://nc233.com/2018/07/weighting-tricks-for-machine-learning-with-icarus-part-1/
The following image shows an example of feature imbalances: these teams of the dataset have not faced the same quality of opposition (Elo). The prediction of the rarer types of matchups can be improved by reweighting techniques.
Another example of good use of sampling weights is the treatment of class imbalances (typically when one of the classes is very rare). See for example what is done by default in scikit-learn: http://scikit-learn.org/stable/modules/generated/sklearn.utils.class_weight.compute_sample_weight.html
Finally, despite all these statistical reasons, sometimes we just need to "manually" increase the importance of an observation for very good reasons, and we use the weights to do so :)
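As a sketch of the "balanced" heuristic that scikit-learn's compute_sample_weight applies (reimplemented here in plain NumPy for illustration, with a made-up label vector):

```python
import numpy as np

def balanced_sample_weights(y):
    # The "balanced" heuristic: n_samples / (n_classes * class_count),
    # so every class contributes the same total weight to the loss.
    classes, counts = np.unique(y, return_counts=True)
    per_class = len(y) / (len(classes) * counts)
    lookup = dict(zip(classes, per_class))
    return np.array([lookup[c] for c in y])

y = np.array([0] * 90 + [1] * 10)   # rare positive class
w = balanced_sample_weights(y)
# Each majority-class instance gets ~0.56 and each rare instance 5.0,
# so the total weight per class is equal (50 each).
```

Upweighting the rare class this way makes each class carry equal influence in the fitted loss, which is often what you want when one class is very rare.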
References
Solon, Gary, Steven J. Haider, and Jeffrey M. Wooldridge. "What are we weighting for?." Journal of Human resources 50.2 (2015): 301-316.
|
How do sample weights work in classification models?
|
Rickyfox's answer is great at explaining how the weights influence the results of a classifier, but maybe you could also be interested in why/how we would need such weights in the first place (which
|
How do sample weights work in classification models?
Rickyfox's answer is great at explaining how the weights influence the results of a classifier, but maybe you could also be interested in why/how we would need such weights in the first place (which is more a statistical problem than a purely ML one).
Sometimes the data is observed under different distributions and we need to use sampling weights to account for this. You can look at Solon et al. (2015) for more details on why sampling weights matter for analyses and ML (it uses mostly algorithms from the econometrics literature, but the logic stays the same).
The idea is that these differences in distributions create imbalances in classes and features. If untreated, this can affect the performance of the predictors/classifiers. I recently wrote a blog post about how you can use these weights to improve the accuracy of some algorithms (it features an example with soccer data): https://nc233.com/2018/07/weighting-tricks-for-machine-learning-with-icarus-part-1/
The following image shows an example of feature imbalances: these teams of the dataset have not faced the same quality of opposition (Elo). The prediction of the rarer types of matchups can be improved by reweighting techniques.
Another example of good use of sampling weights is the treatment of class imbalances (typically when one of the classes is very rare). See for example what is done by default in scikit-learn: http://scikit-learn.org/stable/modules/generated/sklearn.utils.class_weight.compute_sample_weight.html
Finally, despite all these statistical reasons, sometimes we just need to "manually" increase the importance of an observation for very good reasons, and we use the weights to do so :)
References
Solon, Gary, Steven J. Haider, and Jeffrey M. Wooldridge. "What are we weighting for?." Journal of Human resources 50.2 (2015): 301-316.
|
How do sample weights work in classification models?
Rickyfox's answer is great in explaining how the weights influence the results of a classifier, but maybe could you be also interested in why / how we would need such weights in the first place (which
|
13,980
|
Why do RNNs have a tendency to suffer from vanishing/exploding gradient?
|
TL;DR
The main reasons are the following traits of BPTT:
An unrolled RNN tends to be a very deep network.
In an unrolled RNN the gradient in an early layer is a product that (also) contains many instances of the same term.
Long Version
To train an RNN, people usually use backpropagation through time (BPTT), which means that you choose a number of time steps $N$ and unroll your network so that it becomes a feedforward network made of $N$ duplicates of the original network, where each duplicate represents the original network at another time step.
(image source: wikipedia)
So BPTT is just unrolling your RNN, and then using backpropagation to calculate the gradient (as one would do to train a normal feedforward network).
Cause 1: The unrolled network is usually very deep
Because our feedforward network was created by unrolling, it is $N$ times as deep as the original RNN. Thus the unrolled network is often very deep.
In deep feedforward neural networks, backpropagation has "the unstable gradient problem", as Michael Nielsen explains in the chapter Why are deep neural networks hard to train? (in his book Neural Networks and Deep Learning):
[...] the gradient in early layers is the product of terms from all the later layers. When there are many layers, that's an intrinsically unstable situation. The only way all layers can learn at close to the same speed is if all those products of terms come close to balancing out.
I.e. the earlier the layer, the longer the product becomes, and the more unstable the gradient becomes. (For a more rigorous explanation, see this answer.)
Cause 2: The product that gives the gradient contains many instances of the same term
The product that gives the gradient includes the weights of every later layer.
So in a normal feedforward neural network, this product for the $d^{\text{th}}$-to-last layer might look like: $$w_1\cdot\alpha_{1}\cdot w_2\cdot\alpha_{2}\cdot\ \cdots\ \cdot w_d\cdot\alpha_{d}$$
Nielsen explains that (with regard to absolute value) this product tends to be either very big or very small (for a large $d$).
But in an unrolled RNN, this product would look like: $$w\cdot\alpha_{1}\cdot w\cdot\alpha_{2}\cdot\ \cdots\ \cdot w\cdot\alpha_{d}$$
as the unrolled network is composed of duplicates of the same network.
Whether we are dealing with numbers or matrices, the appearance of the same term $d$ times means that the product is much more unstable (as the chances are much smaller that "all those products of terms come close to balancing out").
And so the product (with regard to absolute value) tends to be either exponentially small or exponentially big (for a large $d$).
In other words, the fact that the unrolled RNN is composed of duplicates of the same network makes the unrolled network's "unstable gradient problem" more severe than in a normal deep feedforward network.
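For intuition, the difference between the two products can be checked numerically. Here is a small Python sketch (purely illustrative; the depth $d$, the weight range, and the shared weight $0.9$ are arbitrary choices, not anything from the answer above):

```python
import numpy as np

rng = np.random.default_rng(0)
d = 50  # depth of the unrolled network

# Feedforward case: every layer has its OWN weight. Factors above and
# below 1 can partially cancel, so the product has a chance to stay moderate.
w_ff = rng.uniform(0.5, 1.5, size=d)
prod_ff = np.prod(w_ff)

# Unrolled-RNN case: the SAME shared weight appears in all d factors,
# so any deviation from 1 compounds exponentially.
w_shared = 0.9
prod_rnn = w_shared ** d   # 0.9**50 is about 5e-3; 1.1**50 would be about 117

print(prod_ff, prod_rnn)
```

The per-step activation-derivative factors $\alpha_k$ are left out here; for activations whose derivative is at most 1 (e.g. tanh), including them only makes the shrinkage stronger.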
|
13,981
|
Why do RNNs have a tendency to suffer from vanishing/exploding gradient?
|
Because an RNN is trained by backpropagation through time, it is unfolded into a feedforward net with multiple layers. When the gradient is passed back through many time steps, it tends to grow or vanish, in the same way as it does in deep feedforward nets.
|
13,982
|
Why do RNNs have a tendency to suffer from vanishing/exploding gradient?
|
I would like to add one point that the answers above seem to have missed about the vanishing gradient in RNNs.
What people mean by the vanishing gradient in RNNs should be understood somewhat differently from its original meaning in DNNs. But first we need some notation.
Let $h_0 \neq 0$; the recursive formula for an Elman recurrent neural network is
\begin{align*}
h_t &= f_h(U_hx_t + W_hh_{t-1} + b_h) \\
\hat y_t &= f_y(W_y h_t +b_y)
\end{align*}
for $1\leq t \leq T$, where $T$ is the total number of time steps.
Denote by $E_t$ the error between the real value $y_t$ and $\hat y_t$; then the total loss is $L = \sum_{t=1}^T E_t$. Due to the shared-weight nature of RNNs, finding the partial derivative of $L$ w.r.t. $W_h$ obliges you to account for the contribution of $W$ at every time-stamp $i<t$ when computing $\frac{\partial E_t}{\partial W}$.
Then if you look at the paper on which most of our current understanding of the exploding/vanishing gradient is based:
This term [$\frac{\partial E_t}{\partial W}$ at $i$] tends to become very small in comparison to terms for which $\tau$ is close to $t$. This means that even though there
might exist a change in $W$ that would allow it to jump to another (better) basin of attraction, the gradient of the cost with respect to $W$ does not reflect that possibility.
What this means is that when $||W||$ is small, some partial derivatives at time-stamp $i$ of a component $E_t$ may effectively get lost because of their time distance. The result is a gradient-descent algorithm that pays too much attention to the surrounding (usually bumpy) loss surface and does not necessarily go downhill in the long run.
So what people usually mean by the vanishing gradient in RNNs is that only the long-range components of the gradient, which carry information from distant time steps, vanish; the gradient of the system as a whole does not.
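This decay with time distance can be illustrated with a small Python/NumPy sketch (the network size, weight scale, and sequence length below are arbitrary illustrative choices): with a small recurrent weight matrix, the spectral norm of $\frac{\partial h_T}{\partial h_i}$ shrinks roughly geometrically as the time distance $T-i$ grows.

```python
import numpy as np

rng = np.random.default_rng(1)
n, T = 8, 30

# Small recurrent weight matrix: spectral norm comfortably below 1.
Wh = 0.25 * rng.standard_normal((n, n)) / np.sqrt(n)
Uh = rng.standard_normal((n, n)) / np.sqrt(n)

# Run the Elman recursion h_t = tanh(Uh x_t + Wh h_{t-1}).
x = rng.standard_normal((T, n))
h = np.zeros(n)
pre = []                       # pre-activations a_t, needed for tanh'(a_t)
for t in range(T):
    a = Uh @ x[t] + Wh @ h
    h = np.tanh(a)
    pre.append(a)

# Jacobian chain: dh_T/dh_i = prod_{k=i+1}^{T} diag(1 - tanh(a_k)^2) @ Wh.
# Accumulate factors backward and record the spectral norm at each distance.
norms = []
J = np.eye(n)
for k in range(T - 1, 0, -1):
    J = J @ (np.diag(1.0 - np.tanh(pre[k]) ** 2) @ Wh)
    norms.append(np.linalg.norm(J, 2))  # norm of d h_last / d h_{k-1}
# norms shrinks roughly geometrically with time distance
```

The partial derivatives with respect to $W$ at distant time-stamps are bounded by these Jacobian norms, which is exactly why the long-range components get lost first.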
|
13,983
|
Why do RNNs have a tendency to suffer from vanishing/exploding gradient?
|
This chapter describes the reason for the vanishing gradient problem really well. When we unfold an RNN over time, it is like a deep neural network, so according to my understanding it also suffers from the vanishing gradient problem, just as deep feedforward nets do.
|
13,984
|
Low variance components in PCA, are they really just noise? Is there any way to test for it?
|
One way of testing the randomness of a small principal component (PC) is to treat it like a signal instead of noise: i.e., try to predict another variable of interest with it. This is essentially principal components regression (PCR).
In the predictive context of PCR, Lott (1973) recommends selecting PCs in a way that maximizes $R^2$; Gunst and Mason (1977) focus on $MSE$. PCs with small eigenvalues (even the smallest!) can improve predictions (Hotelling, 1957; Massy, 1965; Hawkins, 1973; Hadi & Ling, 1998; Jackson, 1991), and have proven very interesting in some published, predictive applications (Jolliffe, 1982, 2010). These include:
A chemical engineering model using PCs 1, 3, 4, 6, 7, and 8 of 9 total (Smith & Campbell, 1980)
A monsoon model using PCs 8, 2, and 10 (in order of importance) out of 10 (Kung & Sharif, 1980)
An economic model using PCs 4 and 5 out of 6 (Hill, Fomby, & Johnson, 1977)
The PCs in the examples listed above are numbered according to their eigenvalues' ranked sizes. Jolliffe (1982) describes a cloud model in which the last component contributes most. He concludes:
The above examples have shown that it is not necessary to find obscure or bizarre data in order for the last few principal components to be important in principal component regression. Rather it seems that such examples may be rather common in practice. Hill et al. (1977) give a thorough and useful discussion of strategies for selecting principal components which should have buried forever the idea of selection based solely on size of variance. Unfortunately this does not seem to have happened, and the idea is perhaps more widespread now than 20 years ago.
Furthermore, excluding small-eigenvalue PCs can introduce bias (Mason & Gunst, 1985). Hadi and Ling (1998) recommend considering regression $SS$ as well; they summarize their article thus:
The basic conclusion of this article is that, in general, the PCs may fail to account for the regression fit. As stated in Theorem 1, it is theoretically possible that the first $(p-1)$ PCs, which can have almost 100% of the variance, contribute nothing to the fit, while the response variable $\text{Y}$ may fit perfectly the last PC which is always ignored by the PCR methodology.
The reason for the failure of the PCR in accounting for the variation of the response variable is that the PCs are chosen based on the PCD [principal components decomposition] which depends only on $\text{X}$. Thus, if PCR is to be used, it should be used with caution and the selection of the PCs to keep should be guided not only by the variance decomposition but also by the contribution of each principal component to the regression sum of squares.
I owe this answer to @Scortchi, who corrected my own misconceptions about PC selection in PCR with some very helpful comments, including: "Jolliffe (2010) reviews other ways of selecting PCs." This reference may be a good place to look for further ideas.
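A toy construction in the spirit of Hadi and Ling's Theorem 1 is easy to set up. The following Python/NumPy sketch (an illustration of the idea, not taken from any of the cited papers; all sizes and noise levels are arbitrary) builds a response that depends only on the lowest-variance direction, so the last PC carries essentially all of the fit:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 500

# Two orthogonal latent directions: z1 has large variance, z2 tiny variance.
z1 = 10.0 * rng.standard_normal(n)
z2 = 0.1 * rng.standard_normal(n)
Z = np.column_stack([z1, z2])

# Rotate into observed predictors so neither raw column IS the small direction.
theta = 0.7
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
X = Z @ R.T

# The response depends ONLY on the low-variance direction.
y = z2 + 0.01 * rng.standard_normal(n)

# PCA via SVD of the centered predictor matrix.
Xc = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
scores = Xc @ Vt.T   # column 0: PC1 (large variance), column 1: PC2 (small)

r = [abs(np.corrcoef(scores[:, j], y)[0, 1]) for j in range(2)]
# r[0] is near 0 (the dominant PC is useless for prediction),
# r[1] is near 1 (the last PC explains the fit)
```

Selecting PCs by variance alone would discard PC2 here and leave a model with essentially no predictive power.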
References
- Gunst, R. F., & Mason, R. L. (1977). Biased estimation in regression: an evaluation using mean squared error. Journal of the American Statistical Association, 72(359), 616–628.
- Hadi, A. S., & Ling, R. F. (1998). Some cautionary notes on the use of principal components regression. The American Statistician, 52(1), 15–19. Retrieved from http://www.uvm.edu/~rsingle/stat380/F04/possible/Hadi+Ling-AmStat-1998_PCRegression.pdf.
- Hawkins, D. M. (1973). On the investigation of alternative regressions by principal component analysis. Applied Statistics, 22(3), 275–286.
- Hill, R. C., Fomby, T. B., & Johnson, S. R. (1977). Component selection norms for principal components regression. Communications in Statistics – Theory and Methods, 6(4), 309–334.
- Hotelling, H. (1957). The relations of the newer multivariate statistical methods to factor analysis. British Journal of Statistical Psychology, 10(2), 69–79.
- Jackson, E. (1991). A user's guide to principal components. New York: Wiley.
- Jolliffe, I. T. (1982). Note on the use of principal components in regression. Applied Statistics, 31(3), 300–303. Retrieved from http://automatica.dei.unipd.it/public/Schenato/PSC/2010_2011/gruppo4-Building_termo_identification/IdentificazioneTermodinamica20072008/Biblio/Articoli/PCR%20vecchio%2082.pdf.
- Jolliffe, I. T. (2010). Principal components analysis (2nd ed.). Springer.
- Kung, E. C., & Sharif, T. A. (1980). Regression forecasting of the onset of the Indian summer monsoon with antecedent upper air conditions. Journal of Applied Meteorology, 19(4), 370–380. Retrieved from http://iri.columbia.edu/~ousmane/print/Onset/ErnestSharif80_JAS.pdf.
- Lott, W. F. (1973). The optimal set of principal component restrictions on a least-squares regression. Communications in Statistics – Theory and Methods, 2(5), 449–464.
- Mason, R. L., & Gunst, R. F. (1985). Selecting principal components in regression. Statistics & Probability Letters, 3(6), 299–301.
- Massy, W. F. (1965). Principal components regression in exploratory statistical research. Journal of the American Statistical Association, 60(309), 234–256. Retrieved from http://automatica.dei.unipd.it/public/Schenato/PSC/2010_2011/gruppo4-Building_termo_identification/IdentificazioneTermodinamica20072008/Biblio/Articoli/PCR%20vecchio%2065.pdf.
- Smith, G., & Campbell, F. (1980). A critique of some ridge regression methods. Journal of the American Statistical Association, 75(369), 74–81. Retrieved from https://cowles.econ.yale.edu/P/cp/p04b/p0496.pdf.
|
13,985
|
Low variance components in PCA, are they really just noise? Is there any way to test for it?
|
Adding to @Nick Stauner's answer, when you're dealing with subspace clustering, PCA is often a poor solution.
When using PCA, one is mostly concerned with the eigenvectors with the highest eigenvalues, which represent the directions in which the data is 'stretched' the most. If your data contains small subspaces, PCA will simply ignore them, since they don't contribute much to the overall data variance.
So, small eigenvectors are not always pure noise.
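A minimal sketch of this failure mode (Python/NumPy; all numbers are arbitrary illustrative choices): two clusters that differ only along a low-variance direction are well separated by the smallest PC and not at all by the largest.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 300

# Two clusters that differ only along a LOW-variance direction (x2),
# while x1 carries large but cluster-irrelevant variance.
labels = rng.integers(0, 2, size=n)
x1 = 10.0 * rng.standard_normal(n)           # dominant, uninformative
x2 = labels + 0.1 * rng.standard_normal(n)   # small, cluster-separating
X = np.column_stack([x1, x2])

Xc = X - X.mean(axis=0)
cov = np.cov(Xc, rowvar=False)
vals, vecs = np.linalg.eigh(cov)  # eigenvalues in ascending order

proj_top = Xc @ vecs[:, -1]  # top PC: dominated by x1
proj_bot = Xc @ vecs[:, 0]   # smallest PC: aligned with x2

def separation(p):
    """Distance between cluster means, in units of pooled within-cluster std."""
    a, b = p[labels == 0], p[labels == 1]
    pooled = np.sqrt((a.var() + b.var()) / 2)
    return abs(a.mean() - b.mean()) / pooled

s_top, s_bot = separation(proj_top), separation(proj_bot)
# s_bot is large (clusters cleanly split), s_top is near 0
```

Keeping only the top PC would make the two clusters indistinguishable, which is the sense in which PCA is a poor preprocessing step for subspace clustering.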
|
13,986
|
Calculate variance explained by each predictor in multiple regression using R
|
The percentage explained depends on the order entered.
If you specify a particular order, you can compute this trivially in R (e.g. via the update and anova functions, see below), but a different order of entry would yield potentially very different answers.
[One possibility might be to average across all orders or something, but it would get unwieldy and might not be answering a particularly useful question.]
--
As Stat points out, with a single model, if you're after one variable at a time, you can just use 'anova' to produce the incremental sums of squares table. This would follow on from your code:
anova(fit)
Analysis of Variance Table
Response: dv
Df Sum Sq Mean Sq F value Pr(>F)
iv1 1 0.033989 0.033989 0.7762 0.4281
iv2 1 0.022435 0.022435 0.5123 0.5137
iv3 1 0.003048 0.003048 0.0696 0.8050
iv4 1 0.115143 0.115143 2.6294 0.1802
iv5 1 0.000220 0.000220 0.0050 0.9469
Residuals 4 0.175166 0.043791
--
So there we have the incremental variance explained; how do we get the proportion?
Pretty trivially, scale them by 1 divided by their sum. (Replace the 1 with 100 for percentage variance explained.)
Here I've displayed it as an added column to the anova table:
af <- anova(fit)
afss <- af$"Sum Sq"
print(cbind(af,PctExp=afss/sum(afss)*100))
Df Sum Sq Mean Sq F value Pr(>F) PctExp
iv1 1 0.0339887640 0.0339887640 0.77615140 0.4280748 9.71107544
iv2 1 0.0224346357 0.0224346357 0.51230677 0.5137026 6.40989591
iv3 1 0.0030477233 0.0030477233 0.06959637 0.8049589 0.87077807
iv4 1 0.1151432643 0.1151432643 2.62935731 0.1802223 32.89807550
iv5 1 0.0002199726 0.0002199726 0.00502319 0.9468997 0.06284931
Residuals 4 0.1751656402 0.0437914100 NA NA 50.04732577
--
If you decide you want several particular orders of entry, you can do something even more general like this (which also allows you to enter or remove groups of variables at a time if you wish):
m5 = fit
m4 = update(m5, ~ . - iv5)
m3 = update(m4, ~ . - iv4)
m2 = update(m3, ~ . - iv3)
m1 = update(m2, ~ . - iv2)
m0 = update(m1, ~ . - iv1)
anova(m0,m1,m2,m3,m4,m5)
Analysis of Variance Table
Model 1: dv ~ 1
Model 2: dv ~ iv1
Model 3: dv ~ iv1 + iv2
Model 4: dv ~ iv1 + iv2 + iv3
Model 5: dv ~ iv1 + iv2 + iv3 + iv4
Model 6: dv ~ iv1 + iv2 + iv3 + iv4 + iv5
Res.Df RSS Df Sum of Sq F Pr(>F)
1 9 0.35000
2 8 0.31601 1 0.033989 0.7762 0.4281
3 7 0.29358 1 0.022435 0.5123 0.5137
4 6 0.29053 1 0.003048 0.0696 0.8050
5 5 0.17539 1 0.115143 2.6294 0.1802
6 4 0.17517 1 0.000220 0.0050 0.9469
(Such an approach might also be automated, e.g. via loops and the use of get. You can add and remove variables in multiple orders if needed)
... and then scale to percentages as before.
(NB. The fact that I explain how to do these things should not necessarily be taken as advocacy of everything I explain.)
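The order dependence itself is easy to verify in any language. Here is a Python/NumPy sketch (an illustration with made-up data, not a translation of the R code above) computing the Type I (sequential) sum of squares for the same variable under two entry orders with correlated predictors:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 100

# Two strongly correlated predictors.
x1 = rng.standard_normal(n)
x2 = x1 + 0.3 * rng.standard_normal(n)
y = x1 + x2 + rng.standard_normal(n)

def rss(y, cols):
    """Residual sum of squares of y on an intercept plus the given columns."""
    X = np.column_stack([np.ones(len(y))] + cols)
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    r = y - X @ beta
    return r @ r

rss0 = rss(y, [])
# Incremental (Type I) sums of squares for x1 under the two entry orders.
ss_x1_first = rss0 - rss(y, [x1])
ss_x1_last = rss(y, [x2]) - rss(y, [x1, x2])
# Entered first, x1 soaks up the variance it shares with x2;
# entered last, it only gets its unique contribution, a far smaller number.
```

With orthogonal predictors the two numbers would coincide, which is why order only matters when predictors overlap.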
|
Calculate variance explained by each predictor in multiple regression using R
|
The percentage explained depends on the order entered.
If you specify a particular order, you can compute this trivially in R (e.g. via the update and anova functions, see below), but a different ord
|
Calculate variance explained by each predictor in multiple regression using R
The percentage explained depends on the order entered.
If you specify a particular order, you can compute this trivially in R (e.g. via the update and anova functions, see below), but a different order of entry would yield potentially very different answers.
[One possibility might be to average across all orders or something, but it would get unwieldy and might not be answering a particularly useful question.]
--
As Stat points out, with a single model, if you're after one variable at a time, you can just use 'anova' to produce the incremental sums of squares table. This would follow on from your code:
anova(fit)
Analysis of Variance Table
Response: dv
Df Sum Sq Mean Sq F value Pr(>F)
iv1 1 0.033989 0.033989 0.7762 0.4281
iv2 1 0.022435 0.022435 0.5123 0.5137
iv3 1 0.003048 0.003048 0.0696 0.8050
iv4 1 0.115143 0.115143 2.6294 0.1802
iv5 1 0.000220 0.000220 0.0050 0.9469
Residuals 4 0.175166 0.043791
--
So there we have the incremental variance explained; how do we get the proportion?
Pretty trivially, scale them by 1 divided by their sum. (Replace the 1 with 100 for percentage variance explained.)
Here I've displayed it as an added column to the anova table:
af <- anova(fit)
afss <- af$"Sum Sq"
print(cbind(af,PctExp=afss/sum(afss)*100))
Df Sum Sq Mean Sq F value Pr(>F) PctExp
iv1 1 0.0339887640 0.0339887640 0.77615140 0.4280748 9.71107544
iv2 1 0.0224346357 0.0224346357 0.51230677 0.5137026 6.40989591
iv3 1 0.0030477233 0.0030477233 0.06959637 0.8049589 0.87077807
iv4 1 0.1151432643 0.1151432643 2.62935731 0.1802223 32.89807550
iv5 1 0.0002199726 0.0002199726 0.00502319 0.9468997 0.06284931
Residuals 4 0.1751656402 0.0437914100 NA NA 50.04732577
--
If you decide you want several particular orders of entry, you can do something even more general like this (which also allows you to enter or remove groups of variables at a time if you wish):
m5 = fit
m4 = update(m5, ~ . - iv5)
m3 = update(m4, ~ . - iv4)
m2 = update(m3, ~ . - iv3)
m1 = update(m2, ~ . - iv2)
m0 = update(m1, ~ . - iv1)
anova(m0,m1,m2,m3,m4,m5)
Analysis of Variance Table
Model 1: dv ~ 1
Model 2: dv ~ iv1
Model 3: dv ~ iv1 + iv2
Model 4: dv ~ iv1 + iv2 + iv3
Model 5: dv ~ iv1 + iv2 + iv3 + iv4
Model 6: dv ~ iv1 + iv2 + iv3 + iv4 + iv5
Res.Df RSS Df Sum of Sq F Pr(>F)
1 9 0.35000
2 8 0.31601 1 0.033989 0.7762 0.4281
3 7 0.29358 1 0.022435 0.5123 0.5137
4 6 0.29053 1 0.003048 0.0696 0.8050
5 5 0.17539 1 0.115143 2.6294 0.1802
6 4 0.17517 1 0.000220 0.0050 0.9469
(Such an approach might also be automated, e.g. via loops and the use of get. You can add and remove variables in multiple orders if needed)
... and then scale to percentages as before.
(NB. The fact that I explain how to do these things should not necessarily be taken as advocacy of everything I explain.)
|
13,987
|
Calculate variance explained by each predictor in multiple regression using R
|
I proved that the percentage of variation explained by a given predictor in a multiple linear regression is the product of the slope coefficient and the correlation of the predictor with the fitted values of the dependent variable (assuming that all variables have been standardized to have mean zero and variance one; which is without loss of generality). Find it here:
https://www.researchgate.net/publication/306347340_A_Natural_Decomposition_of_R2_in_Multiple_Linear_Regression
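As a numerical illustration of this kind of decomposition, here is a Python sketch (synthetic data, not the paper's code) of the closely related Pratt measure, which uses each predictor's correlation with $y$ itself: with all variables standardized, the shares $\beta_j \cdot \mathrm{cor}(x_j, y)$ sum exactly to $R^2$.

```python
import numpy as np

# Synthetic data: 3 predictors, linear signal plus noise
rng = np.random.default_rng(0)
n = 500
X = rng.normal(size=(n, 3))
y = X @ np.array([0.5, 0.3, 0.2]) + rng.normal(size=n)

# Standardize everything to mean 0, variance 1
Xs = (X - X.mean(axis=0)) / X.std(axis=0)
ys = (y - y.mean()) / y.std()

beta, *_ = np.linalg.lstsq(Xs, ys, rcond=None)  # no intercept needed: data are centered
r_xy = np.array([np.corrcoef(Xs[:, j], ys)[0, 1] for j in range(3)])

shares = beta * r_xy  # Pratt's decomposition of R^2
r2 = 1.0 - np.sum((ys - Xs @ beta) ** 2) / np.sum(ys ** 2)
print(shares, shares.sum(), r2)  # the shares sum to R^2
```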
|
13,988
|
Calculate variance explained by each predictor in multiple regression using R
|
You can use the hier.part library to obtain goodness-of-fit measures for regressions of a single dependent variable against all combinations of N independent variables
library(hier.part)
env <- D[,2:5]
all.regs(D$dv, env, fam = "gaussian", gof = "Rsqu",
print.vars = TRUE)
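The idea behind all.regs (fit every combination of predictors and record a goodness-of-fit value) can be sketched in Python on synthetic data; this is an illustration of the idea, not the package's code:

```python
import numpy as np
from itertools import combinations

def r_squared(X, y):
    """R^2 of an OLS fit with intercept."""
    Xi = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(Xi, y, rcond=None)
    resid = y - Xi @ beta
    tss = np.sum((y - y.mean()) ** 2)
    return 1.0 - resid @ resid / tss

rng = np.random.default_rng(1)
X = rng.normal(size=(50, 4))
y = X @ np.array([1.0, 0.5, 0.0, 0.0]) + rng.normal(size=50)

# R^2 for every non-empty subset of the 4 predictors (15 models)
fits = {subset: r_squared(X[:, list(subset)], y)
        for k in range(1, 5)
        for subset in combinations(range(4), k)}
for subset, r2 in sorted(fits.items(), key=lambda kv: -kv[1])[:3]:
    print(subset, round(r2, 3))  # best three subsets
```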
|
13,989
|
Calculate variance explained by each predictor in multiple regression using R
|
I am just re-posting the comment of @Phil here because this is clearly the best answer:
I would suggest looking into the relaimpo package, and its accompanying paper: jstatsoft.org/index.php/jss/article/view/v017i01/v17i01.pdf I use the "LMG" method frequently.
I have been searching for the answer to this question for 5 hours now, skim-reading some papers, and indeed relaimpo::lmg seems a great solution. One can also use relaimpo::pmvd or relaimpo::pratt (the latter corresponds to @user128460's answer, and has the problem of sometimes yielding negative shares), or methods relying on random forests. See these papers for more info: https://doi.org/10.1198/000313007X188252
https://doi.org/10.7275/5FEX-B874
https://www.sciencedirect.com/science/article/pii/S0951832015001672
https://www.jstor.org/stable/25652309
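For concreteness, the LMG measure averages each predictor's incremental R^2 over all orderings of entry. A minimal Python sketch on synthetic data (not relaimpo's implementation):

```python
import numpy as np
from itertools import permutations

def r2(X, y, cols):
    """R^2 of an OLS fit (with intercept) on the given predictor columns."""
    cols = list(cols)
    if not cols:
        return 0.0
    Xi = np.column_stack([np.ones(len(y)), X[:, cols]])
    beta, *_ = np.linalg.lstsq(Xi, y, rcond=None)
    resid = y - Xi @ beta
    return 1.0 - resid @ resid / np.sum((y - y.mean()) ** 2)

def lmg(X, y):
    """Average each predictor's incremental R^2 over all entry orders."""
    p = X.shape[1]
    shares = np.zeros(p)
    orders = list(permutations(range(p)))
    for order in orders:
        seen = []
        for j in order:
            shares[j] += r2(X, y, seen + [j]) - r2(X, y, seen)
            seen.append(j)
    return shares / len(orders)

rng = np.random.default_rng(2)
X = rng.normal(size=(100, 3))
y = X @ np.array([1.0, 0.5, 0.2]) + rng.normal(size=100)

shares = lmg(X, y)
print(shares, shares.sum(), r2(X, y, range(3)))  # shares sum to the full-model R^2
```

Because the increments telescope within each ordering, the shares always sum to the full-model R^2, and each share is non-negative.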
|
13,990
|
How to visualize a 3-D density function?
|
Well, there are four possible approaches that come to mind (although I am sure there are many more): you can plot the data as a perspective plot, a contour plot, a heat map, or, if you prefer, a 3-D scatter plot (which is more or less a perspective plot when you have values of $z$ for all $(x,y)$ pairs). Here are some examples of each (from a well-known 3-D data set in R):
Here are two additional plots that have nicer plotting features than the ones given prior.
Which method you choose will depend on how you prefer to visualize 3-D data sets.
Here is the `R` code used to generate these four mentioned plots.
library(fields)
library(scatterplot3d)
# Data for illustration
x = seq(-10, 10, length= 100)
y = x
f = function(x, y) { r = sqrt(x^2+y^2); 10 * sin(r)/r }
z = outer(x, y, f)
z[is.na(z)] = 1  # guard against the 0/0 at the origin
#Method 1
#Perspective Plot
persp(x,y,z,col="lightblue",main="Perspective Plot")
#Method 2
#Contour Plot
contour(x,y,z,main="Contour Plot")
filled.contour(x,y,z,color=terrain.colors,main="Contour Plot")
#Method 3
#Heatmap
image(x,y,z,main="Heat Map")
image.plot(x,y,z,main="Heat Map")
#Method 4
#3-D Scatter Plot
X = expand.grid(x,y)
x = X[,1]
y = X[,2]
z = c(z)
scatterplot3d(x,y,z,color="lightblue",pch=21,main="3-D Scatter Plot")
|
13,991
|
Is there "unsupervised regression"?
|
I've never encountered this term before. I am unsure whether it would spread light or darkness within either realm of statistics: those being machine learning (where supervised and unsupervised distinctions are central to problem solving) and inferential statistics (where regression, confirmatory analysis, and NHSTs are most often employed).
Where those two philosophies overlap, the majority of regression and associated terminology is thrown around in a strictly supervised setting. However, I think many existing concepts in unsupervised learning are closely related to regression-based approaches, especially when you naively iterate over each class or feature as an outcome and pool the results. Examples of this are PCA and bivariate correlation analysis. By applying best subset regression iteratively over a number of variables, you can do a very complex sort of network estimation, as is assumed in structural equation modeling (strictly in the EFA sense). This, to me, seems like an unsupervised learning problem with regression.
However, regression parameter estimates are not reflexive. For simple linear regression, regressing $Y$ upon $X$ will give you different results, different inference, and different estimates (not even inverses, necessarily) than regressing $X$ upon $Y$. In my mind, this lack of commutativity makes most naive regression applications ineligible for unsupervised learning problems.
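Concretely, with correlation $r$ the slope of $Y$ on $X$ is $r \cdot s_Y/s_X$ while the slope of $X$ on $Y$ is $r \cdot s_X/s_Y$, so one is the reciprocal of the other only when $r^2 = 1$. A small self-contained check on made-up numbers:

```python
# Made-up bivariate data with an imperfect linear relationship
x = [1.0, 2.0, 3.0, 4.0, 5.0]
y = [1.2, 1.9, 3.7, 3.1, 5.4]

mx, my = sum(x) / len(x), sum(y) / len(y)
sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
sxx = sum((a - mx) ** 2 for a in x)
syy = sum((b - my) ** 2 for b in y)

b_yx = sxy / sxx  # slope of y regressed on x
b_xy = sxy / syy  # slope of x regressed on y

# b_yx * b_xy equals r^2, so b_yx == 1/b_xy only when |r| = 1
print(b_yx, b_xy, 1 / b_xy)
```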
|
13,992
|
Is there "unsupervised regression"?
|
The closest thing I can think of is a little black magic that stirred people up when it was announced a few years ago, but I don't believe it gained any real traction in the community. The authors developed a statistic they called the "Maximal Information Coefficient" (MIC). The general idea behind their method is to take high-dimensional data, plot each variable against every other variable in pairs, and then apply an interesting window-binning algorithm to each plot (which calculates the MIC for those two variables) to determine if there is potentially a relationship between the two variables. The technique is supposed to be robust at identifying arbitrarily structured relationships, not just linear ones.
The technique targets pairs of variables, but I'm sure it could be extended to investigate multivariate relationships. The main problem would be that you'd have to run the technique on significantly more combinations of variables as you allow for permutations of more and more variables. I imagine it probably takes some time just with pairs: attempting to use this on even remotely high dimensional data and considering more complex relationships than pairs of variables would become intractable fast.
Reference the paper Detecting Novel Associations in Large Datasets (2011)
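A stripped-down version of the idea is to bin the scatterplot on a grid and score dependence via the mutual information of the binned counts. The sketch below illustrates that (a simplification for illustration only; the real MIC algorithm searches over many grid resolutions and normalizes the result, see the paper):

```python
import math
import random

def binned_mi(xs, ys, bins=8):
    """Mutual information (in nats) of two samples after equal-width binning."""
    def to_bins(vs):
        lo, hi = min(vs), max(vs)
        return [min(bins - 1, int((v - lo) / (hi - lo) * bins)) for v in vs]
    bx, by = to_bins(xs), to_bins(ys)
    n = len(xs)
    pxy, px, py = {}, {}, {}
    for a, b in zip(bx, by):
        pxy[(a, b)] = pxy.get((a, b), 0) + 1 / n
        px[a] = px.get(a, 0) + 1 / n
        py[b] = py.get(b, 0) + 1 / n
    return sum(p * math.log(p / (px[a] * py[b])) for (a, b), p in pxy.items())

random.seed(0)
x = [random.uniform(-1, 1) for _ in range(2000)]
y = [v * v for v in x]                             # nonlinear dependence, near-zero correlation
z = [random.uniform(-1, 1) for _ in range(2000)]   # independent of x

print(binned_mi(x, y), binned_mi(x, z))  # dependence on x detected for y but not z
```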
|
13,993
|
Is there "unsupervised regression"?
|
Auto regression is one way to compute weights of a matrix minimizing error on reconstructed input from given input.
|
13,994
|
Is there "unsupervised regression"?
|
This question came to my mind while researching the difference between supervised and unsupervised methods. Coming from an econometric background I prefer to think in models, which slowed my understanding as most machine learning literature I encountered focuses on methods.
What I have found thus far is that a strict distinction should be made between clustering (unsupervised) versus classification (supervised). The continuous analogy of the relation between these model designs would be principal component analysis (unsupervised) versus linear regression (supervised).
However, I would argue that the relation between clustering and classification is purely coincidental; it exists only when we interpret both model designs as describing a geometric relation, which I find unnecessarily restrictive. All unsupervised methods that I know of (k-means, elastic map algorithms such as Kohonen/neural gas, DBSCAN, PCA) can also be interpreted as latent variable models. In the case of clustering methods, this would amount to viewing belonging to a cluster as being in a state, which can be coded as a latent variable model by introducing state dummies.
Given the interpretation as latent variable models, you are free to specify any, possibly nonlinear, model that describes your features in terms of continuous latent variables.
|
13,995
|
Is there a formula or rule for determining the correct sampSize for a randomForest?
|
In general, the sample size for a random forest acts as a control on the "degree of randomness" involved, and thus as a way of adjusting the bias-variance tradeoff. Increasing the sample size results in a "less random" forest, and so has a tendency to overfit. Decreasing the sample size increases the variation in the individual trees within the forest, preventing overfitting, but usually at the expense of model performance. A useful side-effect is that lower sample sizes reduce the time needed to train the model.
The usual rule of thumb for the best sample size is a "bootstrap sample", a sample equal in size to the original dataset, but selected with replacement, so some rows are not selected, and others are selected more than once. This typically provides near-optimal performance, and is the default in the standard R implementation. However, you may find in real-world applications that adjusting the sample size can lead to improved performance. When in doubt, select the appropriate sample size (and other model parameters) using cross-validation.
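A property of the bootstrap sample worth keeping in mind: drawing n rows with replacement leaves any given row out with probability (1 - 1/n)^n, which is approximately 1/e, so only about 63.2% of distinct rows appear in each tree's sample (the remaining rows are the "out-of-bag" rows used for OOB error). A quick check in Python:

```python
import random

random.seed(0)
n = 10_000
# One bootstrap sample: n draws with replacement from n row indices
sample = [random.randrange(n) for _ in range(n)]
frac_unique = len(set(sample)) / n
print(frac_unique)  # close to 1 - 1/e, about 0.632
```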
|
13,996
|
Is there a formula or rule for determining the correct sampSize for a randomForest?
|
For random forests to work as well in new data as they do in training data, the required sample size is enormous, often being 200 times the number of candidate features. See here.
|
13,997
|
Is there a formula or rule for determining the correct sampSize for a randomForest?
|
I ran 4500 random forests overnight with some random parameter settings:
Regression problem Ysignal = x1^2+sin(x2*pi) + x3 * x4 + x5
where any x are sampled independent from a normal distribution, sd=1, mean=1
Ytotal = Ysignal + Yerror
where Yerror = rnorm(n.observations,sd=sd(Ysignal))*noise.factor
theoretical.explainable.variance "TEV" = var(Ysignal) / var(Ytotal)
randomForest.performance = explained.variance(OOB cross-validation) / TEV
datasets were sampled from the regression problem and added noise
n.obs was a random number between 1000 and 5000
n.extra.dummy.variables between 1 and 20
ntree always 1000
sample_replacement always true
mtry is 5 to 25, limited by n.obs
noise.factor between 0 and 9
samplesize.ratio a random number between 10% and 100%, the ratio size of each bootstrap
all models were trained like rfo = randomForest(x=X, y=Ytotal, <more args>)
The randomForest.performance (its ability to explain the highest fraction of the TEV) generally increases as samplesize is lowered when the TEV is less than 50%, and decreases when the TEV is higher than 50%.
Thus, if your randomForest model fit reports e.g. 15% explained variance by OOB-CV, and this is an acceptable model precision for you, then you can probably tweak the performance a little higher by lowering sampsize to a third of the number of observations, given ntree > 1000.
Moral: for very noisy data it is better to de-correlate trees than to lower bias by growing maximal-size trees.
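Under this setup the TEV has a closed form: Yerror is independent of the signal and has standard deviation noise.factor * sd(Ysignal), so TEV = 1 / (1 + noise.factor^2), e.g. 0.2 for noise.factor = 2. A quick Python check of the simulation design (not the original R code):

```python
import random

random.seed(0)
n = 20_000
noise_factor = 2.0

y_signal = [random.gauss(0, 1) for _ in range(n)]
mean_s = sum(y_signal) / n
var_sig = sum((v - mean_s) ** 2 for v in y_signal) / n
sd_sig = var_sig ** 0.5

# Yerror = rnorm(n, sd = sd(Ysignal)) * noise.factor, as in the setup above
y_total = [s + random.gauss(0, sd_sig * noise_factor) for s in y_signal]
mean_t = sum(y_total) / n
var_tot = sum((v - mean_t) ** 2 for v in y_total) / n

tev = var_sig / var_tot
print(round(tev, 3))  # near 1 / (1 + noise_factor**2) = 0.2
```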
|
13,998
|
Ideas for "lab notebook" software?
|
These are called Electronic Lab Notebooks (ELN).
Here are some of the open source options I've looked at:
The Sage Notebook.
The new IPython Notebook, which can now be run as a webapp on EC2 and Azure.
Leo, which can be used with IPython and in many other ways.
Various wiki, blogging, and CMS solutions.
|
13,999
|
Ideas for "lab notebook" software?
|
My favorite: Evernote. You can tag entries (e.g., 'analysis', 'idea', etc.), you can paste pictures and graphics, and you can share notebooks with collaborators. And: it's basically free (well, freemium). But the free edition is absolutely sufficient for me.
|
14,000
|
Ideas for "lab notebook" software?
|
I've never used it personally, but Microsoft has a piece of software in the Office suite called OneNote that accomplishes a similar goal to your e-lab notebook specifications. Refer to their website for more information. They also offer a free trial bundled with MS Office here.
|