35,001 | Interpreting the neural network output in R? | Your interpretation looks correct. You can check it yourself by calling predict on some data and comparing your calculations to predict. I first did this in a spreadsheet, and then I calculated an R neural network using metaprogramming.
By the way, the R package neuralnet draws nice diagrams, but apparently it supports only regression (not classification?).
35,002 | Interpreting the neural network output in R? | You can also use the following code for plotting nnet results:
install.packages("devtools")
library(devtools)
source_url('https://gist.githubusercontent.com/fawda123/7471137/raw/466c1474d0a505ff044412703516c34f1a4684a5/nnet_plot_update.r')
#plot each model
plot.nnet(net)
Reference: https://beckmw.wordpress.com/tag/nnet/
35,003 | mtry and unbalanced use of predictor variables in Random Forest | The part of the overall random forest algorithm that uses mtry is (adapted from The Elements of Statistical Learning):
At each terminal node that is larger than minimal size,
1) Select mtry variables at random from the $p$ regressor variables,
2) From these mtry variables, pick the best variable and split point,
3) Split the node into two daughter nodes using the chosen variable and split point.
As an aside - you can use the tuneRF function in the randomForest package to select the "optimal" mtry for you, using the out-of-bag error estimate as the criterion.
The random selection of variables at each node splitting step is what makes it a random forest, as opposed to just a bagged estimator. Quoting from The Elements of Statistical Learning, p 588 in the second edition:
The idea in random forests ... is to improve the variance reduction of bagging by reducing the correlation between the trees, without increasing the variance too much. This is achieved in the tree-growing process through random selection of the input variables.
There is no incremental increase in bias due to this. Of course, if the model itself is fundamentally biased, e.g., by leaving out important predictor variables, using random forests won't make the situation any better, but it won't make it worse either.
The unbalanced use of predictor variables just reflects the fact that some are less important than others, where important is used in a heuristic rather than a formal sense, and as a consequence, for some trees, may not be used often or at all. For example, think about what would happen if you had a variable that was barely significant on the full data set, but you then generated a lot of bootstrap datasets from the full data set and ran the regression again on each bootstrap dataset. You can imagine that the variable would be insignificant on a lot of those bootstrap datasets. Now compare to a variable that was extremely highly significant on the full dataset; it would likely be significant on almost all of the bootstrap datasets too. So if you counted up the fraction of regressions for which each variable was "selected" by being significant, you'd get an unbalanced count across variables. This is somewhat (but only somewhat) analogous to what happens inside the random forest, only the variable selection is based on "best at each split" rather than "p-value < 0.05" or some such.
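The bootstrap thought experiment above can be simulated directly. Below is a hedged Python sketch (mine, not part of the original answer): it regresses y on a strong and a barely relevant predictor across bootstrap samples and counts how often each slope comes out "significant" ($|t| > 2$):

```python
import math
import random

random.seed(0)

n = 50
x = [random.gauss(0, 1) for _ in range(n)]   # strong predictor
w = [random.gauss(0, 1) for _ in range(n)]   # barely relevant predictor
y = [2.0 * xi + 0.1 * wi + random.gauss(0, 1) for xi, wi in zip(x, w)]

def t_stat(u, v):
    """t-statistic of the slope in a simple regression of v on u."""
    m = len(u)
    mu, mv = sum(u) / m, sum(v) / m
    sxx = sum((ui - mu) ** 2 for ui in u)
    sxy = sum((ui - mu) * (vi - mv) for ui, vi in zip(u, v))
    b = sxy / sxx
    a = mv - b * mu
    rss = sum((vi - a - b * ui) ** 2 for ui, vi in zip(u, v))
    return b / math.sqrt(rss / (m - 2) / sxx)

def significant_count(u, v, reps=200):
    """How many bootstrap samples make the slope of v on u 'significant'."""
    count = 0
    for _ in range(reps):
        idx = [random.randrange(len(u)) for _ in range(len(u))]
        ub = [u[i] for i in idx]
        vb = [v[i] for i in idx]
        if abs(t_stat(ub, vb)) > 2:
            count += 1
    return count

strong = significant_count(x, y)
weak = significant_count(w, y)
# The strong predictor is "selected" in essentially every bootstrap sample,
# the weak one only occasionally -- an unbalanced "selection" count.
print(strong, weak)
```

With these settings the strong predictor is flagged in almost every bootstrap sample while the weak one is flagged only occasionally, mirroring the unbalanced split counts inside a forest.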
EDIT in response to a question by the OP: Note, however, that variable importance measures are not based solely on counts of how many times a variable is used in a split. Consequently, you can have "important" variables (as measured by "importance") that are used less often in splits than less "important" variables. For example, consider the model:
$y_i = I(x_i > c) + 0.25\,z_i^2 + e_i$
as implemented and estimated by the following R code:
library(randomForest)  # needed for randomForest(), importance() and varUsed()
x <- runif(500)
z <- rnorm(500)
y <- (x>0.5) + z*z/4 + rnorm(500)
df <- data.frame(list(y=y,x=x,z=z,junk1=rnorm(500),junk2=runif(500),junk3=rnorm(500)))
foo <- randomForest(y~x+z+junk1+junk2+junk3,mtry=2,data=df)
importance(foo)
IncNodePurity
x 187.38456
z 144.92088
junk1 102.41875
junk2 93.61086
junk3 92.59587
varUsed(foo)
[1] 16916 17445 16883 16434 16453
Here $x$ has higher importance, but $z$ is used more frequently in splits; $x$'s importance is high but in some sense very local, while $z$ is more important over the full range of $z$ values.
For a fuller discussion of random forests, see Chap. 15 of The Elements..., which the link above allows you to download as a pdf for free.
35,004 | Help me understand Bayesian updating | Assume the outcome of each trial is a coin flip that only depends on some constant, unknown bias $\theta$ which you are trying to infer so you can predict the next outcome after seeing some data. Imagine that $\theta$ itself was drawn from some prior distribution, which we'll assume to be a Beta distribution with parameters $a,b$, depicted graphically below
Then the generative model for your data can be written as
$$
\theta \sim Beta(a,b)
$$
$$
X \sim Binomial(n,\theta)
$$
where $X$ is the number of successes (or 1's in your example) out of $n$ trials.
The first distribution is known as your prior and the second is your likelihood. The Beta distribution is conjugate to the Binomial distribution, which means your posterior is still a Beta distribution. I assume you're familiar with Bayes' rule at least in theory so I'll just explain practically how to update your belief distribution in this model and make predictions about upcoming trials.
The predictive distribution in the case of the Beta-Binomial model is simply the expectation (the mean) of your belief about $\theta$, which for $Beta(a,b)$ is $\frac{a}{a+b}$. So for example if you have no reason to assume a priori that any value between 0 and 1 is more likely than any other, you could set $a=b=1$ so that your belief was totally uniform (see plot). Then your prediction is $\frac{1}{1+1} = 0.5$.
Say that you observe 10 trials with 8 successes and 2 failures. The posterior distribution is then $Beta(a+8,b+2)$. Notice that the parameters $a,b$ of your $Beta(a,b)$ prior can be interpreted as "pseudo-observations", where $a$ is the number of heads and $b$ the number of tails that you have in effect hallucinated, since they're treated the same as actual observations are in your posterior belief.
So you can easily calculate the predicted outcome for your examples above, but you have to assume some parameter values $a$ and $b$ for your prior. Then your prediction is simply
$\frac{a+x}{a+b+N}$, where $x$ is the number of successes observed and $N$ is the total number of trials.
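As a quick numerical check of the formula above, here is a minimal Python sketch (mine, not from the original answer); `predictive` simply evaluates $\frac{a+x}{a+b+N}$:

```python
from fractions import Fraction

def predictive(a, b, successes, trials):
    """Posterior-mean prediction for the next trial under a Beta(a, b) prior."""
    return Fraction(a + successes, a + b + trials)

# Uniform prior (a = b = 1), then 8 successes in 10 trials:
print(predictive(1, 1, 8, 10))   # 3/4
# With no data the prediction is just the prior mean a / (a + b):
print(predictive(1, 1, 0, 0))    # 1/2
```

Note how the 8/10 data pulls the prediction from the prior mean 1/2 toward the observed rate 0.8.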
35,005 | Collaborative filtering and implicit ratings; normalization? | If you are going to populate the entire userxarticle matrix with dwell-times, you are going to run in to sparsity issues very quickly.
Also, a simple average of dwell-times is prone to many problems, for example, what if you have very few records, or if one user left her browser open for a month ?
Step #1: Filling in the blanks
From my experience dealing with user dwell time, the number of users that spend $t$ seconds viewing a site decreases greatly as $t$ increases.
I found that modelling user dwell-time as an Exponential curve is a good approximation.
Using the Bayesian approach, and using the Gamma distribution as the prior distribution on the mean of each site's dwell-time, we get a familiar formula:
Harmonic mean:
$$\frac{n+m}{\frac{m}{b}+\frac{1}{t_1}+\dots+\frac{1}{t_n}}$$
where $t_i$ is the $i$-th observed dwell time, $n$ is the number of observations, $b$ is the bias you introduce and $m$ is its strength.
For example, setting $b=3,m=2$ is like assuming two fictional users viewed a site for 3 seconds when we have no data for that user × article combination.
Note that this formula is much more immune to outliers, since it assumes the Exponential distribution (and not the Gaussian distribution, like the arithmetic mean does).
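The smoothing above can be sketched in a few lines of Python (the function and parameter names are mine):

```python
def smoothed_harmonic_mean(times, b=3.0, m=2.0):
    """Harmonic mean of dwell times with m pseudo-observations of value b."""
    n = len(times)
    return (n + m) / (m / b + sum(1.0 / t for t in times))

# With no data the estimate falls back to the bias b = 3.0:
print(smoothed_harmonic_mean([]))
# With data, the estimate is pulled toward the prior:
print(smoothed_harmonic_mean([4.0, 10.0]))
```

A single huge outlier (say, a browser left open) contributes only a tiny $1/t$ term, so it barely moves the estimate, unlike an arithmetic mean.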
Step #2: Populating the matrix
Times are positive, and they have certain bounds that make sense (for example, a maximum of one day).
However, after the matrix factorization, any numeric value can appear in the matrix cells, including negative terms.
The common practice is to populate the user × article matrix with
$$\mathrm{logit}(t)$$
where logit is the inverse of the sigmoid function (with $t$ first scaled into $(0,1)$, e.g. by dividing by the maximum dwell time).
And then when interpolating the dwell time for a user $i$ and article $j$, we use:
$$\mathrm{sigmoid}(\langle\vec{u_i},\vec{a_j}\rangle)$$
Instead of only using the dot product.
This way we can be certain that the end result would be bounded to a range that makes sense.
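The transform pair above can be sketched as follows (Python, mine; the one-day cap `T_MAX` is an assumption based on the bound mentioned in Step #2):

```python
import math

T_MAX = 86400.0  # assumed one-day cap on dwell time, in seconds

def logit(p):
    return math.log(p / (1.0 - p))

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def encode(t):
    """Dwell time -> matrix entry: scale into (0,1), then logit."""
    return logit(min(t, T_MAX - 1.0) / T_MAX)

def decode(z):
    """Predicted dot product -> dwell time, guaranteed inside (0, T_MAX)."""
    return sigmoid(z) * T_MAX

print(round(decode(encode(120.0)), 6))  # round trip recovers 120.0
```

Whatever value the factorization produces for $\langle\vec{u_i},\vec{a_j}\rangle$, `decode` maps it back into a plausible dwell time.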
35,006 | Collaborative filtering and implicit ratings; normalization? | Hu, Koren, and Volinsky faced a similar problem, for which they proposed a solution in Collaborative Filtering for Implicit Feedback Datasets. The example that they used was time spent watching TV shows, but I will put it in terms of time reading articles.
Their basic idea was that the most important aspect was whether or not a user looked at an article. Therefore, they created a matrix with binary entries, where entry $(u,i)$ of the matrix represents whether or not user $u$ looked at article $i$. The goal is to estimate this binary entry as well as possible. Feeling that there is also some value in the length of time reading, they weighted each entry of the matrix. All the $0$ entries got a weight of $1$. Letting $r_{ui}$ be the amount of time spent reading the article, they proposed a few weighting schemes for the non-zero entries:
$w_{ui} = 1 + \alpha r_{ui}$
$w_{ui} = 1 + \alpha \log (1 + r_{ui} / \epsilon)$
where $\alpha$ and $\epsilon$ are tuning parameters.
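The two weighting schemes can be sketched as follows (Python, mine; $\alpha = 40$ is the value Hu et al. report to work well, while $\epsilon = 1$ is just an illustrative choice):

```python
import math

ALPHA, EPS = 40.0, 1.0  # tuning parameters (these particular values are assumptions)

def w_linear(r):
    """Linear confidence weight: 1 + alpha * r."""
    return 1.0 + ALPHA * r

def w_log(r):
    """Logarithmic confidence weight: 1 + alpha * log(1 + r / eps)."""
    return 1.0 + ALPHA * math.log(1.0 + r / EPS)

# Unobserved entries keep weight 1; reading time raises confidence:
print(w_linear(0.0), w_linear(2.0))  # 1.0 81.0
```

The log scheme grows much more slowly, so one very long reading session does not dominate the loss.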
Finally, they used matrix factorization techniques to estimate the binary entries using squared error loss and the weights given above. I have a simple implementation of Hu et al.'s algorithm in R at https://github.com/andland/implicitcf.
More recently, Johnson proposed to extend this technique in Logistic Matrix Factorization for Implicit Feedback Data. The idea is basically the same but uses a logistic transformation and the negative weighted Bernoulli log likelihood as a loss function.
35,007 | Collaborative filtering and implicit ratings; normalization? | There is also a different, less sophisticated method to handle this. However, I reckon Uri Goren's proposed methods probably work better.
I used a different method to normalize the time spent on an article page.
I divided the time by the number of words in the article.
Also, I set a maximum time spent on the page by looking at the average reading time.
By dividing the length of an article by the maximum time someone would need to read it, an upper bound was set. In that way outliers were handled.
I would also recommend utilising more implicit feedback variables such as scroll length. This variable can be used to reinforce the time spent on a page.
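The normalization described above can be sketched as follows (Python, mine; the 230 words-per-minute reading speed is an assumed average, not from the original answer):

```python
WORDS_PER_SECOND = 230.0 / 60.0  # assumed average reading speed of 230 wpm

def normalized_dwell(seconds, word_count):
    """Dwell time scaled by article length and capped at full reading time."""
    max_seconds = word_count / WORDS_PER_SECOND  # upper bound for this article
    return min(seconds, max_seconds) / max_seconds

print(round(normalized_dwell(60.0, 460), 3))   # half the expected reading time -> 0.5
print(normalized_dwell(9999.0, 460))           # open browser tab -> capped at 1.0
```

The result is comparable across short and long articles, and outliers saturate at 1 instead of dominating.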
35,008 | Physical meaning of correlation? | Suppose that X, Y, and Z each have 100 values for 100 different floor-sweepers.
If X, Y, and Z are independent, it means that the rate at which a person sweeps on a later date does not depend on the rate at which he/she swept at an earlier date. However, it is possible for the rate to increase systematically even with independence. If everyone sweeps faster later, and if the increase does not depend on initial speed, this will be the case.
The best way I've seen to visualize correlations of different magnitude is to graph them.
x <- rnorm(100)
y <- x + rnorm(100, 0, .5)
cor(x,y)
plot(x,y)
y <- x + rnorm(100, 0, 1)
cor(x,y)
plot(x,y)
y <- x + rnorm(100, 0, 2)
cor(x,y)
plot(x,y)
shows correlations of about .9, .7 and .5.
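These empirical values line up with theory: for $y = x + \varepsilon$ with independent $x \sim N(0,1)$ and $\varepsilon \sim N(0,\sigma^2)$, $\mathrm{cor}(x,y) = 1/\sqrt{1+\sigma^2}$. A quick check (Python, mine):

```python
import math

def theoretical_cor(sigma):
    """cor(x, x + e) for x ~ N(0,1) and e ~ N(0, sigma^2), independent."""
    return 1.0 / math.sqrt(1.0 + sigma ** 2)

for sigma in (0.5, 1.0, 2.0):
    print(round(theoretical_cor(sigma), 3))  # 0.894, 0.707, 0.447
```

So the three noise levels in the R code correspond to population correlations of roughly .9, .7 and .45.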
35,009 | Physical meaning of correlation? | In a very basic and physical sense, a positive correlation means that higher values of one variable are associated with higher values of the other variable.
A negative correlation means that bigger values of one variable tend to co-occur with smaller values of the other variable.
It is important to note that correlation does not imply causation. That is, it does not follow that 'X is a cause of Y' or 'Y is a cause of X' just because they are highly correlated. A positive correlation only means that higher values of X tend to occur with higher values of Y. The value indicates the degree of this linear relationship.
For your example, a positive correlation between X and Y will mean that if the time it takes someone to sweep the floor is high today then the time it takes him tomorrow will also be high.
Was that useful?
35,010 | Watermarking data for datamining | The standard method is to put it in the least significant bits or digits; you may, for instance, calculate the sum of the digits, take it modulo 10, and append this mark to the end of the number, decreasing the last original digit by one if the mark is larger than 5 to keep all statistics almost intact, like this:
294.090842 -> sum of digits is 38, thus the mark is 8; since 8 > 5 the trailing 2 drops to 1, giving: 294.0908418
294.121120 -> sum of digits is 22, thus mark is 2 and we add it like this: 294.1211202
...
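The marking rule can be sketched as follows (Python, mine; note it assumes the last digit is nonzero whenever the mark exceeds 5, as in the first example where the trailing 2 drops to 1):

```python
def watermark(value: str) -> str:
    """Append a digit-sum mark to a number given as a string."""
    mark = sum(int(c) for c in value if c.isdigit()) % 10
    if mark > 5:
        # Compensate: lowering the last digit keeps the value almost unchanged
        # once the large mark digit is appended.
        value = value[:-1] + str(int(value[-1]) - 1)
    return value + str(mark)

print(watermark("294.090842"))  # 294.0908418
print(watermark("294.121120"))  # 294.1211202
```

The compensation step bounds the perturbation of each value to about half a unit in the last original digit.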
This trace is hard to notice (unless you store data in a proper way, i.e. with accuracy encoded as the number of significant digits), visible even in subset of the data and almost impossible to appear at random.
A personalized mark can be produced by using a user-specific salt and a better checksum algorithm.
However, note that this mark will be visible only in the raw data and your competitors may equally easily remove it by adding a small noise or rounding numbers.
35,011 | GEE with exchangeable working covariance vs. assuming independence and using Huber-White standard errors? | One way is to run a linear regression and apply the robust variance estimator on top of that to guard against biased standard errors.
An important point here is that having pockets of correlated data does not bias your estimates in a linear model - it results in having inflated standard errors. In a non-linear model (e.g. logistic regression), you can get biased estimates, since the population average effect is, in general, different from the individual-specific effect, which is not the case with a linear model. More information on this distinction is in this answer.
Can we take the clustering effect into account with the sandwich estimator?
From the title, I assume you're talking about using Huber-White sandwich standard errors for your confidence intervals and $p$-values. These do impose a diagonal covariance matrix but are robust to the diagonal entries possibly being different - for that reason they were originally used when there is possible heteroskedasticity in your errors, which means that the error variance is non-constant. But, using a slight modification of the Huber-White standard errors where the "meat" of the sandwich is replaced with an empirical estimate of the covariance matrix within a cluster
(still called Huber-White standard errors) provides inference that is robust to non-independence within a cluster (but not between clusters!) - this modification is described pretty clearly in a 2006 paper in The American Statistician called On The So-Called “Huber Sandwich Estimator” and “Robust Standard Errors” by David Freedman.
This procedure is robust to non-independence within a cluster in the sense that it will still give you asymptotically unbiased inference (i.e. the confidence levels and $p$-values will be right) even if there is correlation within a cluster. I suspect this is what the code you labeled 'Empirical Estimator' is doing.
I've fitted two separate GEE models one with exchangeable varcov matrix and the other one with the robust variance estimator (also known as Huber-White, Sandwich Estimator, or empirical variance estimator). The point is under both models I get the same estimated variance per each covariate, but my GEE exchangeable estimates leads to much larger beta estimates that are also statistically significant whereas similar beta covariates are not significant in GEE with robust varcov estimator. I wonder why it happens?
In general, the GEE model solves the equation
$$ \sum_{i=1}^{n} \frac{ \partial \mu_i }{ \partial \beta } V(\alpha)^{-1} (y_i - \mu_i) = 0$$
as a function of the regression coefficients, $\beta$, where $\mu_i = x_i \beta$ is the vector of expected values of the cluster $i$ response, $y_i$, given the predictors $x_i$ under the specified model. $V(\alpha)$ is the "working" covariance matrix of the elements of cluster $i$. (Note that $\mu_i = x_i \beta$ because we're dealing with a linear model, but GEEs can more generally use a link function so that $\mu_i = g(x_i \beta)$.)
A key point here is that when you change the working covariance, you change the estimating equation, therefore the $\beta$ that solves it will be different. For example, if $V$ was $\sigma^2$ down the diagonal and $0$ off the diagonal and $\mu_i = x_i \beta$ as it does here, then the GEE estimator is the least squares estimator, which will not solve that equation in the exchangeable case. So it is no surprise that you're getting different parameter estimates. It may be a coincidence that you're getting the same standard errors.
In your situation, I'd suggest reporting the results that used the exchangeable covariance matrix. While GEE-based inference is consistent even when you've misspecified the correlation structure, it is known that GEE estimators are more efficient when you use a more appropriate covariance structure, and if you have evidence that there are large intra-class correlations within a school, then the exchangeable correlation will probably provide a much closer approximation to the true association structure.
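To make the "changing the working covariance changes the estimate" point concrete, here is an illustrative sketch (not from the original thread) for the simplest case, an intercept-only linear model. Under an exchangeable working correlation with parameter rho, the estimating equation above has a closed-form solution, and it moves away from the independence (ordinary least squares) answer as rho grows.

```python
def gee_mean(clusters, rho):
    """Solve the GEE estimating equation for an intercept-only linear model
    under an exchangeable working correlation with parameter rho.
    For cluster i with n_i observations, the 1' V_i^{-1} 1 and 1' V_i^{-1} y
    terms share the factor 1 / (1 + (n_i - 1) * rho)."""
    num = den = 0.0
    for ys in clusters:
        n = len(ys)
        w = 1.0 + (n - 1) * rho
        num += sum(ys) / w
        den += n / w
    return num / den

# one large homogeneous cluster and one singleton
clusters = [[1.0, 1.0, 1.0, 1.0], [3.0]]
print(gee_mean(clusters, 0.0))  # independence working correlation: the OLS mean, 1.4
print(gee_mean(clusters, 0.5))  # exchangeable: a different estimate
```

As rho approaches 1 the estimate approaches the unweighted average of the cluster means, which is why the two fits in the question can produce different betas.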
Why autocovariances could fully characterise a time series?
A stationary Gaussian process is completely characterized by the combination of its mean, variance and autocorrelation function. The statement as you read it is not true. You need the following additional conditions:
The process is stationary
The process is Gaussian
The mean $μ$ is specified
Then the entire stochastic process is completely characterized by its autocovariance function (or equivalently its variance $σ^2$ + autocorrelation function).
This simply relies on the fact that any multivariate Gaussian distribution is uniquely determined by its mean vector and its covariance matrix. So, given all the conditions stated above, the joint distribution of any $k$ observations in the time series is multivariate normal with a mean vector whose every component equals $μ$ (by stationarity), with every component having variance $σ^2$ (again by stationarity), and with covariance components given by the corresponding lagged covariances in the autocovariance function (stationarity enters again because the autocovariance depends only on the time difference, or lag, between the two observations whose covariance is being taken).
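As an illustrative sketch (my own, with an assumed AR(1)-type autocovariance), the joint covariance matrix of $k$ consecutive observations is built directly from the autocovariance function; together with the constant mean, this matrix pins down the multivariate normal distribution described above.

```python
def cov_matrix(gamma, k):
    """Covariance matrix of k consecutive observations of a stationary
    series: entry (i, j) depends only on the lag |i - j|."""
    return [[gamma(abs(i - j)) for j in range(k)] for i in range(k)]

# an AR(1)-type autocovariance: gamma(h) = sigma2 * phi**h
sigma2, phi = 2.0, 0.5
S = cov_matrix(lambda h: sigma2 * phi ** h, k=4)

# the stationarity structure is visible directly:
assert all(S[i][i] == sigma2 for i in range(4))                      # constant variance
assert all(S[i][j] == S[j][i] for i in range(4) for j in range(4))   # symmetric
```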
Basic easy rules for statistics
Check out Gerald van Belle's book "Statistical Rules of Thumb", a very nice little paperback text loaded with examples of rules of thumb and explanations, including the "Rule of three" that you mention above.
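The "rule of three" the question mentions is easy to sanity-check numerically: if zero events are observed in $n$ trials, an approximate 95% upper confidence bound for the event probability is $3/n$. This sketch (mine, not from the answer or the book) compares it with the exact bound obtained by solving $(1 - p)^n = 0.05$.

```python
def rule_of_three(n):
    """Approximate 95% upper bound on p after 0 events in n trials."""
    return 3.0 / n

def exact_upper_bound(n, alpha=0.05):
    """Exact one-sided upper bound: the p solving (1 - p)**n == alpha."""
    return 1.0 - alpha ** (1.0 / n)

for n in (30, 100, 1000):
    print(n, rule_of_three(n), round(exact_upper_bound(n), 6))
```

The approximation is already within a few percent of the exact bound at modest sample sizes.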
Logistic regression performance with high number of predictors
I think we should give the word to Venables and Ripley, page 198 in MASS:
There is one fairly common circumstance in which both convergence problems and the Hauck-Donner phenomenon can occur. This is when the fitted probabilities are extremely close to zero or one. Consider a medical diagnosis problem with thousands of cases and around fifty binary explanatory variables (which may arise from coding fewer categorical factors); one of these indicators is rarely true but always indicates that the disease is present. Then the fitted probabilities of cases with that indicator should be one, which can only be achieved by taking $\hat\beta_i = \infty$. The result from glm will be warnings and an estimated coefficient of around +/- 10.
Besides potential numerical difficulties there is no formal problem with probabilities being estimated numerically to 0 or 1. However, the $t$-test for testing the hypothesis $\beta_i = 0$, which is based on a quadratic approximation, can become a poor approximation of the likelihood ratio test, and the $t$-test may appear insignificant even though in reality the hypothesis is definitely wrong. As I understand it, this is what the warning is about.
With many predictors a situation like the one Venables and Ripley describe may easily occur; one predictor is mostly not informative, but in certain cases it is a strong predictor for a case.
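The divergence Venables and Ripley describe is easy to reproduce. This is an illustrative sketch (not from MASS) that fits a logistic model by plain gradient ascent on data where a binary indicator perfectly separates the classes, so the coefficient keeps growing the longer the fit runs:

```python
import math

def fit_logistic(xs, ys, iters, lr=0.5):
    """Gradient ascent on the log-likelihood of logit P(y=1) = b0 + b1*x."""
    b0 = b1 = 0.0
    for _ in range(iters):
        g0 = g1 = 0.0
        for x, y in zip(xs, ys):
            p = 1.0 / (1.0 + math.exp(-(b0 + b1 * x)))
            g0 += y - p
            g1 += (y - p) * x
        b0 += lr * g0
        b1 += lr * g1
    return b0, b1

# x = 1 always indicates y = 1: perfect separation
xs, ys = [0, 0, 0, 1, 1], [0, 0, 0, 1, 1]
print(fit_logistic(xs, ys, 200)[1])   # b1 after 200 iterations
print(fit_logistic(xs, ys, 2000)[1])  # larger still: b1 is drifting to infinity
```

A real optimizer stops at some finite iteration count, which is why glm reports a large but finite coefficient together with a warning.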
Logistic regression performance with high number of predictors
While the Hauck-Donner effect is closely related to it, I think the problem in your case is (quasi-)complete separation. This refers to the phenomenon that a certain combination (including interactions) of predictors will give rise to a subset of the observations where you observe only zeros or ones (basically, a combination of predictor values will separate the two classes). Then the maximum likelihood estimate will not exist (it will be infinite, which makes its standard error rather large too). This is what V&R write in the quote by @NRH. If you have many predictors, especially categorical ones, this just becomes more likely to happen for a particular combination of predictors. The HD effect then occurs for the Wald test in such a situation. A canonical treatment of quasi-complete separation (qcs) is Albert and Anderson's 1984 article in Biometrika.
You might want to look at the noverlap package (no longer on CRAN) in R, which contains utilities to deal with quasi-complete separation, or the brglm package, which can deal with qcs.
Logistic regression performance with high number of predictors
I'm not quite sure how to explain your problem, but I can offer a potential solution -- try using an R package called glmnet instead. I have used glmnet for both linear and logistic regression. In one particular problem, I had approximately 1,200 cases (i.e. N = 1,200) and about 110 predictors. The package gave me great results.
Of course, it's worth pointing out that glmnet is primarily used for penalized logistic regression, but since the package lets you select the degree of penalty to apply, I'm sure you can set the penalty to zero to obtain the results of regular logistic regression (i.e. one with no penalty). In any case, the package was made specifically for problems in high dimensions (even N << P), and this seems to be the underlying source of your problem. I highly recommend glmnet.
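The effect of the penalty can be sketched in a few lines (an illustration of ridge-penalized logistic regression in general, not of glmnet's actual coordinate-descent internals): with an L2 penalty the coefficient stays finite even when a predictor perfectly separates the classes, which is exactly the failure mode plain logistic regression hits in high dimensions.

```python
import math

def fit_ridge_logistic(xs, ys, iters, lam, lr=0.2):
    """Gradient ascent on the L2-penalized log-likelihood of
    logit P(y=1) = b0 + b1*x; the intercept is left unpenalized."""
    b0 = b1 = 0.0
    for _ in range(iters):
        g0 = g1 = 0.0
        for x, y in zip(xs, ys):
            p = 1.0 / (1.0 + math.exp(-(b0 + b1 * x)))
            g0 += y - p
            g1 += (y - p) * x
        b0 += lr * g0
        b1 += lr * (g1 - lam * b1)  # the penalty pulls b1 back toward 0
    return b0, b1

# perfectly separated data: unpenalized ML would send b1 to infinity,
# but with lam > 0 the estimate converges to a finite value
xs, ys = [0, 0, 0, 1, 1], [0, 0, 0, 1, 1]
print(fit_ridge_logistic(xs, ys, 2000, lam=1.0)[1])
print(fit_ridge_logistic(xs, ys, 4000, lam=1.0)[1])  # essentially unchanged
```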
Kernel logistic regression
I've written a couple ;o)
G. C. Cawley and N. L. C. Talbot, Efficient approximate leave-one-out cross-validation for kernel logistic regression, Machine Learning, vol. 71, no. 2-3, pp. 243--264, June 2008.
Which gives a reasonable method for choosing kernel and regularisation parameters and an empirical evaluation.
G. C. Cawley, G. J. Janacek and N. L. C. Talbot, Generalised kernel machines, in Proceedings of the IEEE/INNS International Joint Conference on Neural Networks (IJCNN-2007), pages 1732-1737, Orlando, Florida, USA, August 12-17, 2007.
Which basically documents a MATLAB toolbox for making kernel versions of generalised linear models with kernel logistic regression as one of the examples. The library includes code for model selection (but sadly no manual yet, just some demos)
However the earliest paper I know of that uses that particular name is
"Kernel logistic regression and the import vector machine" by Zhu and Hastie, Advances in Neural Information Processing Systems (2001) (available via google scholar) | Kernel logistic regression | I've written a couple ;o)
Kernel logistic regression
This is the only reference I know of
Frölich, M. (2006), Non-parametric regression for binary dependent
variables. The Econometrics Journal, 9: 511–540. doi:
10.1111/j.1368-423X.2006.00196.x
What is the probability P(X > Y) given X ~ Be(a1, b1), and Y ~ Be(a2, b2), and X and Y are independent?
$\Pr(X > Y) = \int_0^1 \frac{x^{a_1 - 1}(1 - x)^{b_1 - 1}}{\text{Be}(a_1,b_1)} \int_0^x\frac{y^{a_2 - 1}(1 - y)^{b_2 - 1}}{\text{Be}(a_2,b_2)} dy dx$
$\Pr(X > Y) = \frac{1}{\text{Be}(a_1,b_1)} \int_0^1 x^{a_1 - 1}(1 - x)^{b_1 - 1}I_x(a_2, b_2) dx$
where $I_x(a, b)$ is the regularized incomplete beta function. If $a$ and $b$ are integers then
$I_x(a,b) = \sum_{j=a}^{a+b-1} {(a+b-1)! \over j!(a+b-1-j)!} x^j (1-x)^{a+b-1-j}.$
Substitute in, do some simple algebra, and the integral will have a closed form solution as a finite sum of beta functions.
If $a_2$ and $b_2$ aren't integers but $a_1$ and $b_1$ are, then calculate $\Pr(X > Y) = 1 - \Pr(Y > X)$. If neither case holds, you're pooched for an analytical solution but you can always do the integral numerically, either deterministically or by Monte Carlo.
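The Monte Carlo route mentioned at the end can be sketched with the standard library alone (an illustration, not part of the original answer); `random.betavariate` draws the Beta variates directly:

```python
import random

def prob_x_greater_y(a1, b1, a2, b2, n=200_000, seed=1):
    """Monte Carlo estimate of P(X > Y) for independent
    X ~ Beta(a1, b1) and Y ~ Beta(a2, b2)."""
    rng = random.Random(seed)
    hits = sum(
        rng.betavariate(a1, b1) > rng.betavariate(a2, b2) for _ in range(n)
    )
    return hits / n

# sanity checks against cases with known answers:
print(prob_x_greater_y(1, 1, 1, 1))  # two uniforms: should be near 1/2
print(prob_x_greater_y(2, 1, 1, 2))  # the integer closed form gives 5/6
```

For the Beta(2, 1) vs Beta(1, 2) case the finite-sum closed form above reduces to $5/6$, which the simulation reproduces to a few decimal places.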
How do you establish complete versus partial mediation in a simple mediational model?
Definitions
I'll use the $a, b, c, c'$ notation common to simple mediation, as shown here.
Assuming there is a positive effect to be mediated (i.e., $c > 0$) and any underlying causal arguments are satisfied then
Partial mediation occurs when $0 < c' < c$.
Complete mediation occurs when $c' = 0$.
Theoretical interest concerns the underlying parameters rather than the sample estimates of these parameters.
Testing for partial mediation
Significance tests can be applied to test for partial mediation.
Significance tests can support inferences such as that $ab$ is significantly greater than zero, or that $c'$ is significantly less than $c$.
Testing for complete mediation
Significance tests cannot be readily applied to the test of complete mediation. The fact that $c$ is significant and $c'$ is not significant is insufficient to prove complete mediation. First, the difference between significant and non-significant is not necessarily significant. Second, even if the reduction is significant, a non-significant $c'$ does not prove that the value of $c'$ is zero.
I imagine there is discussion of approaches to testing for complete mediation in the literature, but a few options spring to mind:
Equivalence testing: You could test the null hypothesis that $c' < \hat{c}$, where $0 < \hat{c} < c$, and $\hat{c}$ is deemed to be sufficiently close to zero, or sufficiently less than $c$, that rejection of the null hypothesis is seen as an argument for complete mediation being plausible.
Confidence intervals: You could get confidence intervals on $c'$.
Bayesian approaches: You could use Bayesian approaches to get a posterior density on $c'$ and if the 95% credibility interval was sufficiently small, you might argue that the mediation is plausibly close to being complete. A quick search revealed this article (Bayesian mediation analysis).
General thoughts on reporting mediation analysis
It seems to me that when quantifying the degree of mediation, both the percentage reduction from $c$ to $c'$ and the size of the indirect effect are interesting. The terms partial and complete mediation suggest a binary distinction that is probably rarely true in social science research applications.
Rather, reporting a mediation analysis should focus on quantifying the degree of mediation both in percentage terms and in terms of the size of the indirect effect. It should also quantify the uncertainty in these estimates.
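The quantities being discussed satisfy an exact algebraic identity for ordinary least squares fits, $c = c' + ab$, so the proportion mediated can be computed as $ab/c$. This is an illustrative sketch with made-up numbers (not data from any study):

```python
def slope(x, y):
    """Simple-regression slope of y on x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    return sxy / sxx

def slopes_y_on_x_and_m(x, m, y):
    """OLS slopes (c', b) of y on x and m, via the 2x2 normal equations."""
    n = len(x)
    mx, mm, my = sum(x) / n, sum(m) / n, sum(y) / n
    sxx = sum((a - mx) ** 2 for a in x)
    smm = sum((a - mm) ** 2 for a in m)
    sxm = sum((a - mx) * (b - mm) for a, b in zip(x, m))
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    smy = sum((a - mm) * (b - my) for a, b in zip(m, y))
    det = sxx * smm - sxm ** 2
    return (smm * sxy - sxm * smy) / det, (sxx * smy - sxm * sxy) / det

x = [0.0, 1.0, 2.0, 3.0, 4.0]
m = [0.2, 1.1, 1.9, 3.2, 3.8]   # mediator, driven partly by x
y = [0.7, 1.7, 2.8, 4.6, 5.9]

a = slope(x, m)                 # path a: x -> m
c = slope(x, y)                 # total effect of x on y
c_prime, b = slopes_y_on_x_and_m(x, m, y)

print(c, c_prime + a * b)       # identical: c decomposes exactly as c' + a*b
print(a * b / c)                # proportion mediated
```

Here $0 < c' < c$, so by the definitions above this toy example exhibits partial mediation.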
Review of David Kenny's points
As an additional point, it is worth noting that David A. Kenny acknowledges the issues related to significance testing for mediation on his webpage. I quote the main passage here:
Note that the steps are stated in terms of zero and nonzero coefficients, not in terms of statistical significance, as they were in Baron and Kenny (1986). Because trivially small coefficients can be statistically significant with large sample sizes and very large coefficients can be nonsignificant with small sample sizes, the steps should not be defined in terms of statistical significance. Statistical significance is informative, but other information should be part of statistical decision making. For instance, consider the case in which path a is large and b is zero. In this case, c = c'. It is very possible that the statistical test of c' is not significant (due to the collinearity between X and M), whereas c is statistically significant. It would then appear that there is complete mediation when in fact there is no mediation at all.
35,021 | How do you establish complete versus partial mediation in a simple mediational model? | The Baron & Kenny approach is somewhat outdated - nowadays it is recommended to use a bootstrapping approach to test for mediation (Preacher & Hayes, 2004). One problem with the B&K approach is that it is possible to observe a change from a significant $X\rightarrow Y$ path to a nonsignificant $X\rightarrow Y$ path with a very small change in the absolute size of the coefficient.
A more direct test of mediation is to test the difference of $c - c'$ (which, in most cases, is equivalent to testing the indirect effect $ab$). The bootstrapping approach has much more statistical power and does not rely on multivariate normality assumptions (which are violated in indirect effects anyway).
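The percentile-bootstrap logic for the indirect effect can be sketched in pure Python. The data-generating values, sample size, and helper functions below are invented for illustration and are not Preacher & Hayes's actual macro:

```python
import random

def slope(x, y):
    """OLS slope of y on x (used for path a)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    return sxy / sxx

def b_path(x, m, y):
    """Coefficient on m from regressing y on x and m (path b)."""
    n = len(x)
    mx, mm, my = sum(x) / n, sum(m) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    smm = sum((mi - mm) ** 2 for mi in m)
    sxm = sum((xi - mx) * (mi - mm) for xi, mi in zip(x, m))
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    smy = sum((mi - mm) * (yi - my) for mi, yi in zip(m, y))
    det = sxx * smm - sxm ** 2
    return (sxx * smy - sxm * sxy) / det

def indirect(x, m, y):
    return slope(x, m) * b_path(x, m, y)

random.seed(7)
n = 200
x = [random.gauss(0, 1) for _ in range(n)]
m = [0.6 * xi + random.gauss(0, 1) for xi in x]
y = [0.6 * mi + 0.2 * xi + random.gauss(0, 1) for mi, xi in zip(m, x)]

# Percentile bootstrap of the indirect effect a*b.
boots = []
for _ in range(1000):
    idx = [random.randrange(n) for _ in range(n)]
    bx = [x[i] for i in idx]
    bm = [m[i] for i in idx]
    by = [y[i] for i in idx]
    boots.append(indirect(bx, bm, by))
boots.sort()
lo, hi = boots[24], boots[974]   # approximate 95% percentile interval
print(lo, hi)  # mediation is indicated when the interval excludes 0
```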
To directly answer your question:
Q: In a simple mediation model, if I have found the indirect effect (ab) to be significant and the direct effect (c') to be small and insignificant, does that mean I have full mediation or partial mediation?
A: According to B&K: full mediation. According to P&H: not necessarily full mediation.
Preacher, K. J., & Hayes, A. F. (2004). SPSS and SAS procedures for estimating indirect effects in simple mediation models. Behavior Research Methods, Instruments, & Computers, 36, 717-731. doi:10.3758/BF03206553
35,022 | Light bulb color problem | You are correct: $n=2k$ does not improve upon $n=2k-1$ in this symmetric case.
Clearly the optimal strategy is to look at the number of red and blue flashes and choose A or B according to which colour appears more. If the same number of each colour appears, it doesn't make any difference which you guess, as your chance of being correct is $0.5$ in that situation.
If there is a majority of one colour after $2k$ flashes then the majority must be even and at least 2, so that colour also had a majority of at least 1 after $2k-1$ flashes. If there is equality after $2k$ flashes, then choosing the colour with a majority after $2k-1$ flashes is as good as any other decision rule in this situation. So with an even number of flashes, the final flash does not help you improve your chance of guessing correctly.
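This can be confirmed by computing the success probability of the majority rule exactly (ties broken by a fair coin). A short sketch, assuming the 0.8/0.2 colour probabilities from the problem:

```python
from math import comb

def p_correct(n, p=0.8):
    """Probability the majority rule identifies the bulb correctly
    after n flashes; ties are broken by a fair coin."""
    total = 0.0
    for x in range(n + 1):
        px = comb(n, x) * p**x * (1 - p)**(n - x)
        if 2 * x > n:
            total += px          # clear majority: correct
        elif 2 * x == n:
            total += 0.5 * px    # tie: guess at random
    return total

# 2k flashes never beat 2k-1 flashes.
for k in range(1, 6):
    assert abs(p_correct(2 * k - 1) - p_correct(2 * k)) < 1e-12
print(p_correct(3))  # 0.896 for both n=3 and n=4
```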
35,023 | Light bulb color problem | To answer in a rigorous way, this problem boils down to observing the number of red flashes $X$, which is either a binomial $\mathcal{B}(n,.8)$ (A) or a binomial $\mathcal{B}(n,.2)$ (B), with probability $0.5$ for each. The probability of selecting bulb A is thus given by Bayes' theorem
$$
\mathbb{P}(b=A|X=x) = \dfrac{\mathbb{P}(X=x|b=A)}{\mathbb{P}(X=x|b=A)+\mathbb{P}(X=x|b=B)}
$$
so this is
$$
\mathbb{P}(b=A|X=x) = \dfrac{{n \choose x} 0.8^x 0.2^{n-x}}{{n \choose x} 0.8^x 0.2^{n-x}+{n \choose x} 0.2^x 0.8^{n-x}}=\dfrac{1}{1+4^{n-2x}}
$$
Therefore A (resp. B) is chosen when $n-2x<0$ (resp. $n-2x>0$). Thus, when $n=2k-1$, the probability of correctly choosing A is
$$
\mathbb{P}(X>(2k-1)/2|b=A) = \mathbb{P}(X\ge k|b=A) =\sum_{x=k}^{2k-1} {2k-1 \choose x} 0.8^x 0.2^{2k-1-x}\,.
$$
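The closed form $1/(1+4^{n-2x})$ can be checked against a direct Bayes computation. A small sketch (function names are mine):

```python
from math import comb

def posterior_direct(n, x, p=0.8):
    """P(bulb = A | X = x) computed straight from Bayes' theorem."""
    la = comb(n, x) * p**x * (1 - p)**(n - x)    # likelihood under A
    lb = comb(n, x) * (1 - p)**x * p**(n - x)    # likelihood under B
    return la / (la + lb)

def posterior_closed(n, x):
    """The simplified form 1 / (1 + 4^(n - 2x)) derived above."""
    return 1.0 / (1.0 + 4.0 ** (n - 2 * x))

for n in range(1, 10):
    for x in range(n + 1):
        assert abs(posterior_direct(n, x) - posterior_closed(n, x)) < 1e-12
print(posterior_closed(5, 4))
```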
35,024 | General linear hypothesis test statistic: equivalence of two expressions | For your second question, you have $\mathbf{y}\sim N(\mathbf{X}\boldsymbol{\beta},\sigma^2 \mathbf{I})$ and suppose you're testing $\mathbf{C}\boldsymbol{\beta}=\mathbf{0}$. So, we have that (the following is all shown through matrix algebra and properties of the normal distribution -- I'm happy to walk through any of these details)
$
\mathbf{C}\hat{\boldsymbol{\beta}}\sim N(\mathbf{0}, \sigma^2 \mathbf{C(X'X)^{-1}C'}).
$
And so,
$
\textrm{Cov}(\mathbf{C}\hat{\boldsymbol{\beta}})=\sigma^2 \mathbf{C(X'X)^{-1}C'}.
$
which leads to noting that
$
F_1 = \frac{(\mathbf{C}\hat{\boldsymbol{\beta}})'[\mathbf{C(X'X)^{-1}C'}]^{-1}\mathbf{C}\hat{\boldsymbol{\beta}}}{\sigma^2}\sim \chi^2 \left(q\right).
$
You get the above result because $F_1$ is a quadratic form and by invoking a certain theorem. This theorem states that if $\mathbf{x}\sim N(\boldsymbol{\mu}, \boldsymbol{\Sigma})$, then $\mathbf{x'Ax}\sim \chi^2 (r,p)$, where $r=\textrm{rank}(A)$ and $p=\frac{1}{2}\boldsymbol{\mu}'\mathbf{A}\boldsymbol{\mu}$, iff $\mathbf{A}\boldsymbol{\Sigma}$ is idempotent. [The proof of this theorem is a bit long and tedious, but it's doable. Hint: use the moment generating function of $\mathbf{x'Ax}$].
So, since $\mathbf{C}\hat{\boldsymbol{\beta}}$ is normally distributed, and the numerator of $F_1$ is a quadratic form involving $\mathbf{C}\hat{\boldsymbol{\beta}}$, we can use the above theorem (after proving the idempotent part).
Then,
$
F_2 = \frac{\mathbf{y}'[\mathbf{I} - \mathbf{X(X'X)^{-1}X'}]\mathbf{y}}{\sigma^2}\sim \chi^2(n-p-1)
$
Through some tedious details, you can show that $F_1$ and $F_2$ are independent. And from there you should be able to justify your second $F$ statistic.
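The idempotency condition in the quoted theorem is easy to check numerically for the residual-maker matrix $\mathbf{I} - \mathbf{X(X'X)^{-1}X'}$: for $\mathbf{A} = (\mathbf{I}-\mathbf{H})/\sigma^2$ and $\boldsymbol{\Sigma} = \sigma^2\mathbf{I}$, the product $\mathbf{A}\boldsymbol{\Sigma} = \mathbf{I}-\mathbf{H}$ must be idempotent, and its trace gives the chi-square degrees of freedom. A pure-Python sketch with an invented 5-point design matrix:

```python
def matmul(A, B):
    """Plain-list matrix product."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def transpose(A):
    return [list(row) for row in zip(*A)]

def inv2(A):
    """Inverse of a 2x2 matrix."""
    (a, b), (c, d) = A
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

n = 5
X = [[1.0, float(i)] for i in range(n)]          # intercept + one predictor
Xt = transpose(X)
H = matmul(matmul(X, inv2(matmul(Xt, X))), Xt)   # hat matrix X(X'X)^{-1}X'
M = [[(1.0 if i == j else 0.0) - H[i][j] for j in range(n)] for i in range(n)]

# M is idempotent (MM = M), so y'My/sigma^2 is chi-square with
# df = rank(M) = trace(M) = n - (number of columns of X).
MM = matmul(M, M)
assert all(abs(MM[i][j] - M[i][j]) < 1e-9 for i in range(n) for j in range(n))
trace_M = sum(M[i][i] for i in range(n))
print(trace_M)
```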
35,025 | General linear hypothesis test statistic: equivalence of two expressions | Since nobody has done so yet, I will address your first question. I also could not find a reference for [a proof of] this result anywhere, so if anyone knows a reference please let us know.
The most general test that this $F$-test can handle is $H_0 : C \beta = \psi$ for some $q \times p$ matrix $C$ and $q$-vector $\psi$. This allows you to test hypotheses like $H_0 : \beta_1 + \beta_2 = \beta_3 + 4$.
However, it seems you are focusing on testing hypotheses like $H_0 : \beta_2 = \beta_4 = \beta_5 = 0$, which is a special case with $\psi=0$ and $C$ being a matrix with one $1$ in each row, and all other entries being $0$. This allows you to more concretely view the smaller model as obtained by simply dropping some columns of your design matrix (i.e. going from $X_u$ to $X_r$), but in the end the result you are seeking is in terms of an abstract $C$ anyway.
Since it happens to be true that the formula $(C\hat{\beta})' [C (X'X)^{-1} C']^{-1} (C \hat{\beta})$ works for arbitrary $C$ and $\psi=0$, I will prove it in that level of generality. Then you can consider your situation as a special case, as described in the previous paragraph.
If $\psi \ne 0$, the formula needs to be modified to
$(C\hat{\beta} - \psi)' [C (X'X)^{-1} C']^{-1} (C \hat{\beta} - \psi)$,
which I also prove at the end of this post.
First I consider the case $\psi=0$.
I will try to keep some of your notation. Let $V_u = \text{colspace}(X) = \{X\beta : \beta \in \mathbb{R}^p\}$.
Let $V_r := \{X\beta : C\beta=0\}$.
(This would be the column space of your $X_r$ in your special case.)
Let $P_u$ and $P_r$ be the projections on these two subspaces.
As you noted, $P_u y$ and $P_r y$ are the predictions under the full model and the null model respectively. Moreover, you can show $\|(P_u - P_r) y\|^2$ is the difference in the sum of squares of residuals.
Let $V_l$ be the orthogonal complement of $V_r$ when viewed as a subspace of $V_u$. (In your special case, $V_l$ would be the span of the columns of the removed columns of $X_u$.)
Then $V_r \oplus V_l = V_u$, and moreover,
In particular, if $P_l$ is the projection onto $V_l$, then $P_u = P_r + P_l$.
Thus, the difference in the sum of squares of residuals is
$$\|P_l y\|^2.$$
If $\tilde{X}$ is a matrix whose columns span $V_l$, then
$P_l = \tilde{X} (\tilde{X}'\tilde{X})^{-1} \tilde{X}'$ and thus
$$\|P_l y\|^2 = y'\tilde{X} (\tilde{X}'\tilde{X})^{-1} \tilde{X}' y.$$
In view of your attempt at the bottom of your post,
all we have to do is show that choosing $\tilde{X} := X(X'X)^{-1} C'$ works, i.e., that $V_l$ is the span of this matrix's columns.
Then that will conclude the proof.
It is clear that $\text{colspace}(\tilde{X}) \subseteq \text{colspace}(X)=V_u$.
Moreover, if $v \in V_r$ then it is of the form $v=X\beta$ with $C\beta=0$,
and thus $v' \tilde{X} = \beta' X' X (X'X)^{-1} C' = (C \beta)' = 0$, which shows $\text{colspace}(\tilde{X})$ is in the orthogonal complement of $V_r$, i.e. $\text{colspace}(\tilde{X}) \subseteq V_l$.
Finally, suppose $X\beta \in V_l$. Then $(X\beta)'(X\beta_0)=0$ for any $\beta_0 \in \text{nullspace}(C)$. This implies $X'X\beta \in \text{nullspace}(C)^\perp = \text{colspace}(C')$, so $X'X\beta=C'v$ for some $v$. Then, $X(X'X)^{-1} C' v = X\beta$, which shows $V_l \subseteq \text{colspace}(\tilde{X})$.
The more general case $\psi \ne 0$ can be obtained by slight modifications to the above proof.
The fit of the restricted model would just be the projection $\tilde{P}_r$
onto the affine space $\tilde{V}_r = \{X \beta : C \beta = \psi\}$, instead of the projection $P_r$ onto the subspace $V_r =\{X\beta : C \beta = 0\}$. The two are quite related however, as one can write $\tilde{V}_r = V_r + \{X \beta_1\}$, where $\beta_1$ is an arbitrarily chosen vector satisfying $C \beta_1 = \psi$, and thus
$$\tilde{P}_r y = P_r(y - X\beta_1) + X \beta_1.$$
Then, using the fact that $P_u X \beta_1 = X \beta_1$,
we have $$(P_u - \tilde{P}_r) y = P_u y - P_r(y - X \beta_1) - X \beta_1 = (P_u - P_r)(y - X\beta_1) = P_l(y - X \beta_1).$$
Recalling $P_l = \tilde{X} (\tilde{X}'\tilde{X})^{-1} \tilde{X}'$ with $\tilde{X} = X(X'X)^{-1} C'$,
the difference in sum of squares of residuals can be shown to be
$$(y - X \beta_1)' P_l (y - X \beta_1) = (C \hat{\beta} - \psi)'[C (X'X)^{-1} C']^{-1} (C\hat{\beta} - \psi).$$
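The claimed equivalence between the drop in residual sum of squares and the quadratic form can be checked numerically in the simplest case: simple linear regression with $C = (0, 1)$ (testing the slope), where $[C(X'X)^{-1}C']^{-1}$ reduces to $S_{xx} = \sum (x_i - \bar{x})^2$. The data below are invented; pure Python:

```python
def fit_slr(x, y):
    """Simple linear regression: returns intercept, slope, SSR, S_xx."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    b1 = sxy / sxx
    b0 = my - b1 * mx
    ssr = sum((yi - b0 - b1 * xi) ** 2 for xi, yi in zip(x, y))
    return b0, b1, ssr, sxx

x = [0.0, 1.0, 2.0, 3.0, 4.0, 5.0]
y = [1.1, 1.9, 3.2, 3.8, 5.1, 6.2]          # made-up data

b0, b1, ssr_full, sxx = fit_slr(x, y)
ybar = sum(y) / len(y)
ssr_restricted = sum((yi - ybar) ** 2 for yi in y)  # slope forced to 0

# For C = (0, 1), [C (X'X)^{-1} C']^{-1} = S_xx, so the quadratic
# form is b1^2 * S_xx, which equals the drop in SSR exactly.
quad_form = b1 ** 2 * sxx
assert abs((ssr_restricted - ssr_full) - quad_form) < 1e-9
print(quad_form)
```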
35,026 | How to log transform Z-scores? | A few quick points about logs
The following R code is a reminder that the log of a negative number is not a number and that the log of zero is negative infinity. Thus, if you are going to take a log of a z-score, you first need to make all values obtained greater than zero.
> values <- c(-2, -1, 0, .001, .1, 1, 10)
> data.frame(values=values, logvalues=log(values))
values logvalues
1 -2.000 NaN
2 -1.000 NaN
3 0.000 -Inf
4 0.001 -6.907755
5 0.100 -2.302585
6 1.000 0.000000
7 10.000 2.302585
Warning message:
In log(values) : NaNs produced
A simple strategy of logs on z-scores
A simple strategy for log transforming a variable is to first add a constant to the variable such that the minimum value is one. i.e., 1 + x - min(x).
The following code shows a simple example of some standardised positively skewed data. The minimum of 1 + x - min(x) is 1. Thus, the variable can be log transformed.
The plot then shows the density before and after transformation.
> set.seed(4444)
> # some skewed raw data
> x <- scale((rnorm(1000) + 3)^2)
>
> xnew <- 1 + x - min(x)
> min(xnew)
[1] 1
> min(x)
[1] -1.584252
> xnew <- log(xnew)
>
> par(mfrow=c(2,1))
> plot(density(x))
> plot(density(xnew))
But exactly what transformation should you perform?
There is a general issue of whether a log transformation is appropriate to your data, and if so, what constant you should add to your raw data.
Presumably if you already have z-scores, then you don't care too much about the absolute metric.
You'll find further discussion of this issue on this question
35,027 | How to log transform Z-scores? | You cannot assign an arbitrary mean and SD to convert z-score data back into raw data (x). However, you can check the shape of the distribution of z-scores by calculating skewness or kurtosis. A log transform is only useful if your data are positively skewed. Moreover, it would be good if you explained what your objective is, as @Karl asked. It might be helpful to visit this URL.
35,028 | How to log transform Z-scores? | I understood that you wanted to log-transform your data so that it looked more "normal" (that is, more symmetric). But if that is the goal, why don't you apply a transform that makes it exactly standard normal?
Suppose you have a variable $x$, and you estimated its CDF as $\hat{F}(x)$. Then you can apply the transformation $y=\Phi^{-1}(\hat{F}(x))$, where $\Phi(\cdot)$ is the standard normal CDF. By definition, $y$ will be standard normal.
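This rank-based "normal scores" transform can be sketched with nothing but the Python standard library (the skewed sample and the $(r-0.5)/n$ plotting-position convention are illustrative assumptions):

```python
import random
from statistics import NormalDist, mean

def normal_scores(x):
    """Map each value through the empirical CDF and then the
    inverse standard normal CDF."""
    n = len(x)
    order = sorted(range(n), key=lambda i: x[i])
    ranks = [0] * n
    for r, i in enumerate(order):
        ranks[i] = r + 1
    nd = NormalDist()
    # (rank - 0.5)/n keeps the argument strictly inside (0, 1).
    return [nd.inv_cdf((r - 0.5) / n) for r in ranks]

random.seed(3)
skewed = [(random.gauss(0, 1) + 3) ** 2 for _ in range(1000)]  # positively skewed
z = normal_scores(skewed)
print(round(mean(z), 3), round(min(z), 2), round(max(z), 2))
```

Because every rank is used exactly once, the transformed values are symmetric around zero by construction, whatever the shape of the input.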
Different algorithms, like scikit-learn quantile transformation, will do this for you.
35,029 | Automating plotting CSV files quickly and with some level of artistic control | Python with Matplotlib. Python is pretty good at manipulating csv files.
but haven't seen any simple examples for plotting multiple columns of
data onto the same X and Y axes as a spreadsheet scatterplot will do.
Have you explored ggplot2? You can keep adding series to a plot using ggplot2. It also has a very good facet plotting feature.
35,030 | Automating plotting CSV files quickly and with some level of artistic control | As @suncoolsu observed, the main thing is a strategy for abstracting data operations. First prepare a template for each graph you intend to produce. This means defining:
Data for the graph
Artistic details
Now you need a program (software package) which takes the data and the artistic details as input and outputs the graph in your preferred format.
The data for the graph will probably not be in the same format as the data in your csv file, so you need a program which reads the data from the csv file and prepares it for plotting.
Finally you will need a program which coordinates the aforementioned processes: data preparation and graphing.
If you work with Unix-based systems, such combinations of different programs are very common, so there exist multiple choices for the data manipulation and coordination programs. All the scripting languages (bash, python, perl, ruby) will be able to perform these tasks. For producing graphs you need more specialized software, such as gnuplot, or specialized libraries of scripting languages. Although I mentioned Unix, you can perform these operations on Windows too.
Instead of scripting languages you can write dedicated programs in C, C++, Java, .NET or any other programming language you prefer. It really depends on which environment you are comfortable working with. You can also use Visual Basic or VB macros in Excel.
I myself would do everything in R, since it can perform all three tasks I mentioned. I routinely have to read csv files, do analysis and perform plots. Since usually I am working with multiple country data, I must produce a graph for each country. R lets me do this very easily. Furthermore R graphs are very customizable (see the graph on R project home page), and when you have a graph you can produce practically any format you like, see ?device.
Even in R you can achieve the aforementioned tasks in different ways. For example you can use only base packages, or use packages such as foreach, plyr, reshape for automation and data manipulation. For plotting you can either use base R graphics, lattice, or ggplot2.
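The same three-part pipeline (read the csv, reshape it into one dataset per group, then coordinate one graph per group) can be sketched in any scripting language. Here is a minimal stdlib-only Python version; the `country` column name and the per-group plotting step are hypothetical placeholders, not taken from the original answer.

```python
import csv
import io
from collections import defaultdict

def split_by_group(csv_file, group_col):
    """Steps 1-2: read the raw csv and reshape it into one dataset per group."""
    groups = defaultdict(list)
    for row in csv.DictReader(csv_file):
        groups[row[group_col]].append(row)
    return dict(groups)

def make_all_graphs(csv_file, group_col):
    """Step 3: coordination loop, one graph per group (plotting left abstract)."""
    results = []
    for name, rows in split_by_group(csv_file, group_col).items():
        # plot(rows) and save as name + ".pdf" via matplotlib, gnuplot, R, ...
        results.append((name, len(rows)))
    return results

demo = io.StringIO("country,year,gdp\nFR,2000,1.4\nFR,2001,1.5\nDE,2000,2.0\n")
print(make_all_graphs(demo, "country"))
```

Swapping the graphics backend only changes the body of the coordination loop, which is the point of abstracting the three steps.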
35,031 | Automating plotting CSV files quickly and with some level of artistic control | I use matplotlib for the same purpose. This is my python code:
#!/home/user/miniconda2/bin/python
import sys
import csv

import numpy as np
import matplotlib.pyplot as plt
# plt.style.use('stylesheet')

if len(sys.argv) != 2:
    print("Single argument expected")
    sys.exit(2)

f = str(sys.argv[1])
with open(f) as f1:
    reader = csv.reader(f1, delimiter=',')
    first_row = next(reader)  # header: x label followed by one label per series
data = np.genfromtxt(f, delimiter=",", skip_header=1)
for col in range(1, len(first_row)):
    plt.plot(data[:, 0], data[:, col], label=first_row[col], lw=2)
plt.title(f[:-4])
plt.xlabel(first_row[0])
# plt.ylabel(first_row[1])
plt.legend(loc=0)
# plt.show()
plt.savefig(f[:-4] + ".pdf")
print("Done! Saved image to " + f[:-4] + ".pdf")
Go through the matplotlib documentation to learn how to change plot styles and other properties. I personally use a stylesheet.
35,032 | The effect of the number of samples in different cells on the results of ANOVA | I don't have Matlab but from what I've read in the on-line help for N-way analysis of variance it's not clear to me whether Matlab would automatically adapt the type (1--3) depending on your design. My best guess is that yes you got different results because the tests were not designed in the same way.
Generally, with an unbalanced design it is recommended to use Type III sums of squares (SS), where each term is tested after all the others (the difference from Type II sums of squares is only apparent when an interaction term is present), while with an incomplete design it might be interesting to compare Type III and Type IV SS. Note that the use of Type III vs. Type II in the case of unbalanced data is subject to discussion in the literature.
(The following is based on a French tutorial that I can no longer find on the original website. Here is a personal copy, and here is another paper that discusses the different ways to compute SS in factorial ANOVAs: Which Sums of Squares Are Best In Unbalanced Analysis of Variance?)
The difference between Type I/II and Type III (also called Yates's weighted squares of means) lies in the model that serves as a reference model when computing SS, and whether factors are treated in the order they enter the model or not. Let's say we have two factors, A and B, and their interaction A*B, and a model like y ~ A + B + A:B (Wilkinson's notation).
With Type I SS, we first compute the SS associated with A, then B, and finally A*B. Each SS is computed as the difference in residual SS (RSS) between the largest model omitting the term of interest and the smallest one including it.
For Type II and III, SS are computed in a sequential manner, starting with those associated with A*B, then B, and finally A. For A*B, it is simply the difference between the RSS in the full model and the RSS in the model without the interaction. The SS associated with B is computed as the difference between the RSS for a model where B is omitted and a model where B is included (reference model); with Type III SS, the reference model is the full model (A+B+A*B), whereas for Type I and II SS, it is the additive model (A+B). This explains why Type II and III will be identical when no interaction is present in the full model. However, to obtain the former SS, we need to use dummy variables to code the levels of the factor, or more precisely differences between those dummy-coded levels (which also means that the reference level considered for a given factor matters; e.g., SAS considers the last level, whereas R considers the first one, in lexicographic order). To compute SS for the A term, we follow the same idea: we consider the difference between the RSS for the model A+B+A*B and that for the reduced model B+A*B (A omitted), in the case of Type III SS; with Type II SS, we consider A+B vs. B.
Note that in a complete balanced design, all SS will be equal. Moreover, with Type I SS, the sum of all SS will equal that of the full model, whatever the order of the terms in the model is. (This is not true for Type II and Type III SS.)
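Both claims above, the order-dependence of Type I SS under imbalance and its disappearance under balance, are easy to verify numerically. The sketch below uses made-up data for a two-factor design (it is an illustration, not code from any of the referenced texts): the sequential SS for factor A is computed from residual sums of squares of nested least-squares fits, with no statistics package assumed.

```python
def ols_rss(X, y):
    """Residual sum of squares of a least-squares fit via the normal equations."""
    n, k = len(y), len(X[0])
    A = [[sum(X[i][p] * X[i][q] for i in range(n)) for q in range(k)] for p in range(k)]
    b = [sum(X[i][p] * y[i] for i in range(n)) for p in range(k)]
    for col in range(k):  # Gaussian elimination with partial pivoting
        piv = max(range(col, k), key=lambda r: abs(A[r][col]))
        A[col], A[piv], b[col], b[piv] = A[piv], A[col], b[piv], b[col]
        for r in range(col + 1, k):
            f = A[r][col] / A[col][col]
            for c in range(col, k):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    beta = [0.0] * k
    for r in range(k - 1, -1, -1):
        beta[r] = (b[r] - sum(A[r][c] * beta[c] for c in range(r + 1, k))) / A[r][r]
    return sum((y[i] - sum(X[i][c] * beta[c] for c in range(k))) ** 2 for i in range(n))

def type1_ss_for_A(a, b, y, A_first):
    """Sequential (Type I) SS for factor A, entered either first or after factor B."""
    ones = [[1.0] for _ in y]
    XA = [[1.0, ai] for ai in a]
    XB = [[1.0, bi] for bi in b]
    XAB = [[1.0, ai, bi] for ai, bi in zip(a, b)]
    if A_first:
        return ols_rss(ones, y) - ols_rss(XA, y)
    return ols_rss(XB, y) - ols_rss(XAB, y)

# Unbalanced 2x2 design (cell counts 3/1/1/3): the A and B dummies are correlated,
# so the SS attributed to A depends on the order it enters the model.
a = [0, 0, 0, 0, 1, 1, 1, 1]
b = [0, 0, 0, 1, 0, 1, 1, 1]
y = [1, 2, 3, 5, 3, 6, 7, 8]
print("unbalanced:", type1_ss_for_A(a, b, y, True), type1_ss_for_A(a, b, y, False))

# Balanced design (2 observations per cell): both orders give the same SS.
a2 = [0, 0, 0, 0, 1, 1, 1, 1]
b2 = [0, 0, 1, 1, 0, 0, 1, 1]
y2 = [1, 2, 4, 5, 3, 4, 6, 7]
print("balanced:  ", type1_ss_for_A(a2, b2, y2, True), type1_ss_for_A(a2, b2, y2, False))
```

In the balanced case the two dummy columns are orthogonal after centering, which is exactly why all SS types coincide there.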
A detailed and concrete overview of the different methods is available in one of Howell's handouts: Computing Type I, Type II, and Type III Sums of Squares directly using the general linear model. That might help you check your code. You can also use R with the car package, by John Fox, who discussed the use of incremental sums of squares in his textbook, Applied Regression Analysis, Linear Models, and Related Methods (Sage Publications, 1997, § 8.2.4--8.2.6). An example of use can be found on Daniel Wollschläger's website.
Finally, the following paper offers a good discussion on the use of Type III SS (§ 5.1):
Venables, W.N. (2000). Exegeses on Linear Models. Paper presented to the S-PLUS User's Conference, Washington, DC, 8-9th October, 1998.
(See also this R-help thread, references therein, and the following post Anova – Type I/II/III SS explained.)
35,033 | How to choose the link function when performing a logistic regression? | I don't know SAS, so I'll just answer based on the statistics side of the question. About the software, you may ask at the sister site, Stack Overflow.
If the link function is different (logistic, probit or cloglog), then you will get different results. For logistic regression, use the logistic link.
Now, about the real differences between these link functions.
Logistic and probit are pretty much the same. To see why they are pretty much the same, remember that in linear regression the link function is the identity. In logistic regression, the link function is the logistic and in the probit, the normal.
Formally, you can see this by noting that, when your dependent variable is binary, you can think of it as following a Bernoulli distribution with a given probability of success.
$Y \sim Bernoulli(p_{i})$
$p_{i} = f(\mu)$
$\mu = XB$
Here, the probability $p_{i}$ is a function of the predictors, just like in linear regression. The real difference is the link function. In linear regression, the link function is just the identity, i.e., $f(\mu) = \mu$, so you can just plug in the linear predictor. In logistic regression, the link function is the cumulative logistic distribution function, given by $1/(1+\exp(-x))$. In probit regression, the link function is the (inverse) cumulative Normal distribution function. And in cloglog regression, the link function is the complementary log-log function.
I have never used the cloglog, so I'll abstain from commenting on it here.
You can see that the Normal and the Logistic are very similar in this blog post by John Cook, of The Endeavour: http://www.johndcook.com/blog/2010/05/18/normal-approximation-to-logistic/.
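Cook's comparison can be reproduced with the standard library alone: a normal CDF whose x-axis is stretched by roughly 1.702 (the classical scaling constant from the approximation literature) tracks the logistic CDF to within about 0.01 everywhere, which is why logit and probit fits give nearly identical probabilities.

```python
import math

def logistic_cdf(x):
    return 1.0 / (1.0 + math.exp(-x))

def normal_cdf(x):
    # standard normal CDF via the error function (stdlib only)
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

# Largest gap between the logistic CDF and the rescaled normal CDF on a grid
xs = [i / 100.0 for i in range(-800, 801)]
max_gap = max(abs(logistic_cdf(x) - normal_cdf(x / 1.702)) for x in xs)
print(max_gap)  # under 0.01 everywhere, so fitted probabilities barely differ
```

The coefficients themselves differ by the same stretch factor, which is why only the predictions, not the raw estimates, are directly comparable across the two links.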
In general I use the logistic because the coefficients are easier to interpret than in a probit regression. In some specific contexts I use probit (ideal point estimation, or when I have to code my own Gibbs sampler), but I guess they are not relevant to you. So, my advice is: whenever in doubt about probit or logistic, use logistic!
35,034 | How to choose the link function when performing a logistic regression? | I have a question/comment. I thought that by definition, logistic regression uses the logit link. If you are using the probit or complementary log-log link, then I do not think that is logistic regression.
What you are doing is fitting generalized linear models on a binary outcome, which is assumed to follow a Bernoulli distribution. The 3 usual choices of link functions are the logit, probit, and complementary log-log. If you are using the logit link, that is logistic regression.
35,035 | How to choose the link function when performing a logistic regression? | All 3 link functions are s-shaped and are not going to be that different. Li and Duan showed that if the predictor variables are well behaved (elliptically symmetric predictors are a subset of the well-behaved group) then changing the link function will change the coefficients by a multiplicative constant. Even if the predictors are not perfectly well behaved, the differences between similar link functions are unlikely to change the overall inference (the exact coefficients will change, but what is important or significant will remain so under a different link function).
The logit allows you to interpret individual coefficients as log-odds, so it tends to be the most popular these days.
35,036 | How to choose the link function when performing a logistic regression? | This is an excellent question that sits at the nexus of mathematics and science. As someone who teaches a linear models course that touches on "logistic regression" and its several possible link functions, I feel compelled to answer.
First, I believe that SAS is fitting a generalized linear model (GLM) and estimating the parameters using MLE (or qMLE) in its "logistic" function. As such, any appropriate link function that transforms $(0, 1)$ into $(-\infty, \infty)$ is appropriate. Of that infinite class of functions, the logit, the probit, and the complementary log-log are members... so are all quantile functions.
Second, there is little appreciable difference between the logit and the probit link functions. While the coefficient estimates will tend to differ by a roughly constant multiplicative factor (logit estimates are typically about 1.6 to 1.8 times their probit counterparts), the predictions will be very similar.
Third, the logit and probit functions are symmetric about (0, 0.5), while the complementary log-log function is not symmetric. This constitutes the primary difference between the logit/probit functions and the complementary log-log function.
Recall that the dependent variable is the probability of a success and the independent variable is the linear predictor. For the logit/probit links, the function value approaches 0 at the same rate as it does 1. For the complementary log-log function, however, that is not true. The cloglog function approaches 1 more sharply than it approaches 0. [Side note: the log-log function is the complement of the cloglog. It approaches 0 more sharply than 1.]
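The asymmetry claim is easy to check numerically with the standard library: the inverse logit satisfies $F(x) + F(-x) = 1$, while the inverse cloglog, $F(x) = 1 - \exp(-\exp(x))$, does not, and it crosses $0.5$ at $x = \ln(\ln 2) \approx -0.367$ rather than at $0$. A quick stdlib-only sketch:

```python
import math

def inv_logit(x):
    return 1.0 / (1.0 + math.exp(-x))

def inv_cloglog(x):
    # inverse of the complementary log-log link
    return 1.0 - math.exp(-math.exp(x))

x = 2.0
print(inv_logit(x) + inv_logit(-x))      # symmetric: sums to 1 (up to rounding)
print(inv_cloglog(x) + inv_cloglog(-x))  # asymmetric: noticeably different from 1
print(math.log(math.log(2.0)))           # where inv_cloglog crosses 0.5
```

The cloglog curve thus approaches 1 faster than it approaches 0, matching the description above.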
Fourth... I'm not sure what that actually means in terms of your last question. My experience is that the science has not advanced enough to suggest a "correct" link function. As a result, I instruct my students to fit their model using several link functions. If the coefficient results differ by "a lot," then there is something wrong with their model. Otherwise, the model is robust to the selection of the link function.
While this is an answer to ayush biyani, I think #4 could drive an interesting discussion about link functions.
35,037 | Testing paired frequencies for independence | Log-linear models might be another option to look at, if you want to study your two-way data structure.
If you assume that the two samples are matched (i.e., there is some kind of dependency between the two series of locutions) and you take into consideration that data are actually counts that can be considered as scores or ordered responses (as suggested by @caracal), then you can also look at marginal models for matched pairs, which usually involve the analysis of a square contingency table. It may not necessarily be the case that you end up with such a square table, but we can also decide on an upper bound for the number of, e.g., passive sentences. Anyway, models for matched pairs are well explained in Chapter 10 of Agresti, Categorical Data Analysis; relevant models for ordinal categories in square tables are testing for quasi-symmetry (the difference in the effect of a category from one case to the other follows a linear trend in the category scores), conditional symmetry ($\pi_{ab}<\pi_{ba}$ or $\pi_{ab}>\pi_{ba}$, $\forall a,b$), and quasi-uniform association (linear-by-linear association off the main diagonal, which in the case of equal-interval scores means uniform local association). Ordinal quasi-symmetry (OQS) is a special case of a linear logit model, and it can be compared to a simpler model where only marginal homogeneity holds with an LR test, because ordinal quasi-symmetry + marginal homogeneity $=$ symmetry.
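A quick preliminary score of how far such a square table departs from symmetry can be computed by hand with Bowker's test (the square-table generalization of McNemar's test), which compares each off-diagonal pair $n_{ab}$ with $n_{ba}$; under symmetry the statistic is asymptotically $\chi^2$ with one degree of freedom per informative pair. The sketch below is a stdlib-only illustration with made-up toy tables, a complement to the models discussed here rather than a substitute.

```python
def bowker_statistic(table):
    """Bowker's symmetry statistic: sum over a<b of (n_ab - n_ba)^2 / (n_ab + n_ba).

    Pairs with n_ab + n_ba == 0 are skipped; they carry no information,
    so the degrees of freedom count only the informative pairs.
    """
    k = len(table)
    stat, df = 0.0, 0
    for a in range(k):
        for b in range(a + 1, k):
            n_ab, n_ba = table[a][b], table[b][a]
            if n_ab + n_ba > 0:
                stat += (n_ab - n_ba) ** 2 / (n_ab + n_ba)
                df += 1
    return stat, df

symmetric = [[10, 5, 2], [5, 8, 4], [2, 4, 6]]
skewed = [[10, 9, 6], [1, 8, 7], [0, 2, 6]]
print(bowker_statistic(symmetric))  # statistic is 0 for a symmetric table
print(bowker_statistic(skewed))
```

A large statistic relative to the $\chi^2$ reference suggests the symmetry-based models above should be relaxed, e.g. toward quasi-symmetry.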
Following Agresti's notation (p. 429), we consider $u_1\leq\dots\leq u_I$ ordered scores for variable $X$ (in rows) and variable $Y$ (in columns); $a$ or $b$ denotes any row or column. The OQS model reads as the following log-linear model:
$$
\log\mu_{ab}=\lambda+\lambda_a+\lambda_b+\beta u_b +\lambda_{ab}
$$
where $\lambda_{ab}=\lambda_{ba}$ for all $a<b$. Compared to the usual QS model for nominal data which is $\log\mu_{ab}=\lambda+\lambda_a^X+\lambda_b^Y+\lambda_{ab}$, where $\lambda_{ab}=0$ would mean independence between the two variables, in the OQS model we impose $\lambda_b^Y-\lambda_b^X=\beta u_b$ (hence introducing the idea of a linear trend). The equivalent logit representation is $\log(\pi_{ab}/\pi_{ba})=\beta(u_b-u_a)$, for $a\leq b$.
If $\beta=0$, then we have symmetry as a special case of this model. If $\beta\neq 0$, then we have stochastically ordered margins, that is, $\beta>0$ means that the column mean is higher than the row mean (and the greater $|\beta|$ is, the greater the differences between the two joint probability distributions $\pi_{ab}$ and $\pi_{ba}$ are, which will be reflected in the differences between the row and column marginal distributions). A test of $\beta=0$ corresponds to a test of marginal homogeneity. The interpretation of the estimated $\beta$ is straightforward: the estimated probability that the score on variable $X$ is $x$ units more positive than the score on $Y$ is $\exp(\hat\beta x)$ times the reverse probability. In your particular case, it means that $\hat\beta$ might allow you to quantify the influence that one particular speaker exerts on the other.
Of note, all R code was made available by Laura Thompson in her S Manual to Accompany Agresti's Categorical Data Analysis.
Hereafter, I provide some example R code so that you can play with it on your own data. So, let's try to generate some data first:
set.seed(56)
d <- as.data.frame(replicate(2, rpois(420, 1.5)))
colnames(d) <- paste("S", 1:2, sep="")
d.tab <- table(d$S1, d$S2, dnn=names(d)) # or xtabs(~S1+S2, d)
library(vcdExtra)
structable(~S1+S2, data=d)
# library(ggplot2)
# ggfluctuation(d.tab, type="color") + labs(x="S1", y="S2") + theme_bw()
Visually, the cross-classification looks like this:
S2 0 1 2 3 4 5 6
S1
0 17 35 31 8 7 3 0
1 41 41 30 23 7 2 0
2 19 43 18 18 5 0 1
3 11 21 9 15 2 1 0
4 0 3 4 1 0 0 0
5 1 0 0 2 0 0 0
6 0 0 0 1 0 0 0
Now, we can fit the OQS model. Unlike Laura Thompson, who used the base glm() function and a custom design matrix for symmetry, we can rely on the gnm package; we need, however, to add a vector of numerical scores to estimate $\beta$ in the above model.
library(gnm)
d.long <- data.frame(counts=c(d.tab), S1=gl(7,1,7*7,labels=0:6),
S2=gl(7,7,7*7,labels=0:6))
d.long$scores <- rep(0:6, each=7)
summary(mod.oqs <- gnm(counts~scores+Symm(S1,S2), data=d.long,
family=poisson))
anova(mod.oqs)
Here, we have $\hat\beta=0.123$, and thus the probability that Speaker B scores 4 when Speaker A scores 3 is $\exp(0.123)=1.13$ times the probability that Speaker B has a score of 3 while Speaker A has a score of 4.
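As a quick numerical illustration of the logit form $\log(\pi_{ab}/\pi_{ba})=\beta(u_b-u_a)$ (a small sketch in Python; the value $\hat\beta=0.123$ is just the estimate from the fit above):

```python
import math

beta_hat = 0.123  # estimated beta from the gnm fit above

# Implied ratio pi_ab / pi_ba for score differences u_b - u_a = 1, 2, 3
for diff in (1, 2, 3):
    print(f"score difference {diff}: ratio = {math.exp(beta_hat * diff):.2f}")
```

So a one-point gap in one direction is about $\exp(0.123)\approx 1.13$ times as likely as the reverse gap, a two-point gap about $1.28$ times, and a three-point gap about $1.45$ times.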
I recently came across the catspec R package which seems to offer similar facilities, but I didn't try it. There was a good tutorial at UseR! 2009 about all this stuff: Introduction to Generalized Nonlinear Models in R, but see also the accompanying vignette, Generalized nonlinear models in R: An overview of the gnm package.
If you want to grasp the idea with real data, there are a lot of examples with real data sets in the vcdExtra package from Michael Friendly. About the OQS model, Agresti used data on Premarital Sex and Extramarital Sex (Table 10.5, p. 421). Results are discussed in §10.4.7 (p. 430), and $\hat\beta$ was estimated at -2.86. The code below (partly adapted from Thompson's textbook) allows these results to be reproduced. We would need to relevel the factors so as to use the same baseline as Agresti.
table.10.5 <- data.frame(expand.grid(PreSex=factor(1:4),
ExSex=factor(1:4)),
counts=c(144,33,84,126,2,4,14,29,0,2,6,25,0,0,1,5))
table.10.5$scores <- rep(1:4,each=4)
summary(mod.oqs <- gnm(counts~scores+Symm(PreSex,ExSex), data=table.10.5,
family=poisson)) # beta = -2.857
anova(mod.oqs) # G^2(5)=2.10
35,038 | Testing paired frequencies for independence | You seem to have ordered categorical data, therefore I suggest a linear-by-linear test as described by Agresti (2007, p229 ff). Function lbl_test() of package coin implements it in R.
Agresti, A. (2007). An Introduction to Categorical Data Analysis (2nd ed.). Hoboken, NJ: Wiley.
35,039 | Testing paired frequencies for independence | I would maybe start with a rank correlation analysis.
The issue is that you may have very low correlations as the effects you are trying to capture are small.
Both Kendall and Spearman correlation coefficients are implemented in R in
cor(x=A, y=B, method = "spearman")
cor(x=A, y=B, method = "kendall")
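For readers working outside R, the same rank correlations can be computed with SciPy (a sketch; the vectors A and B below are made-up stand-ins for the two paired count series):

```python
from scipy.stats import kendalltau, spearmanr

A = [0, 1, 2, 2, 3, 5, 4, 1, 0, 2]  # e.g. counts for speaker 1
B = [1, 0, 2, 3, 3, 4, 5, 2, 1, 2]  # e.g. counts for speaker 2

rho, p_rho = spearmanr(A, B)
tau, p_tau = kendalltau(A, B)
print(f"Spearman rho = {rho:.3f} (p = {p_rho:.3f})")
print(f"Kendall tau  = {tau:.3f} (p = {p_tau:.3f})")
```

Both functions also return a p-value, which helps when the expected correlations are small.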
35,040 | Newman's modularity clustering for graphs | The igraph library implements some algorithms for community structure based on Newman's optimization of modularity. You can consult the reference manual for details and citations.
35,041 | Newman's modularity clustering for graphs | Use the igraph package for R:
http://igraph.sourceforge.net/doc/R/fastgreedy.community.html
This implements a fast algorithm for community finding using the Newman-Girvan modularity maximization method.
your code will look like this:
library(igraph)
# read graph from csv file
G<-read.graph("unipartite_edgelist.txt", format="ncol")
fgreedy<-fastgreedy.community(G,merges=TRUE, modularity=TRUE)
memberships <-community.to.membership(G, fgreedy$merges, steps=which.max(fgreedy$modularity)-1)
print(paste('Number of detected communities=',length(memberships$csize)))
# Community sizes:
print(memberships$csize)
# modularity:
max(fgreedy$modularity)
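A comparable greedy modularity optimization is also available in Python via NetworkX, if that ecosystem is more convenient (a sketch on a toy graph with an obvious two-community structure):

```python
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities, modularity

# Two 5-cliques joined by a single bridge edge
G = nx.disjoint_union(nx.complete_graph(5), nx.complete_graph(5))
G.add_edge(0, 5)

communities = greedy_modularity_communities(G)
print("communities:", [sorted(c) for c in communities])
print("modularity :", round(modularity(G, communities), 3))
```

On this toy graph the greedy merge recovers the two cliques, since joining them across the bridge would lower the modularity.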
35,042 | What do I do when a false negative is far more expensive than a false positive? | Let's assume that your model is well-calibrated (see calibration). If it isn't, you can calibrate it; if there are other issues, they need to be solved accordingly. This does not need re-sampling of the data.
Then, the problem is picking the right threshold for making the predictions. You already seem to have all the pieces for doing it! The costs matrix for your problem is:
                      is good   is rotten
predicted as good        50        -30
predicted as rotten      -4         -1
With this information, you can pick a threshold and make the positive prediction when the predicted probability is greater than the threshold. After doing this, for each prediction assign the appropriate payoff from the matrix above, and calculate the average. You can do this for different thresholds and just pick the threshold that maximizes the expected payoff.
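For a calibrated model the optimal threshold can also be written down directly (a sketch in Python; the payoff numbers come from the matrix above, and the break-even formula is the only logic involved):

```python
# Conditional expected payoff of each action, given calibrated P(good) = p
def payoff_if_predict_good(p):
    return 50 * p - 30 * (1 - p)

def payoff_if_predict_rotten(p):
    return -4 * p - 1 * (1 - p)

# Predict "good" exactly when that action has the higher expected payoff;
# the break-even probability solves 50p - 30(1-p) = -4p - (1-p), i.e. p* = 29/83
p_star = 29 / 83
print(f"predict 'good' whenever P(good) > {p_star:.3f}")
```

At $p^*$ the two actions tie (about $-2.05$ either way); above it, predicting "good" wins in expectation. Sweeping thresholds on held-out data should land near the same value if the model is indeed calibrated.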
35,043 | Is it possible to turn PCA into ICA by rotating the eigenvectors? | No, in general, you can't rotate the principal components to obtain ICA. One of the defining traits of PCA is that the component directions are orthogonal. If you rotate the principal components, they'll still be orthogonal after the rotation. (This is because a rotation matrix is an orthogonal transformation.) Almost always, ICA components are not orthogonal, so rotation of principal components will not recover ICA components.
The only caveat is trivial -- if the ICA directions are orthogonal to begin with, then they will still be orthogonal after rotation, for the same reasons.
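The key fact here, that a rotation maps an orthonormal set to an orthonormal set, is easy to verify numerically (a sketch with NumPy; Q plays the role of the PCA eigenvector matrix):

```python
import numpy as np

rng = np.random.default_rng(42)

# Q: orthonormal columns, standing in for PCA component directions
Q, _ = np.linalg.qr(rng.normal(size=(3, 3)))

theta = 0.7  # an arbitrary rotation about the third axis
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0,            0.0,           1.0]])

rotated = Q @ R
# Still orthonormal after rotation, so it cannot equal a non-orthogonal ICA basis
print(np.allclose(rotated.T @ rotated, np.eye(3)))  # True
```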
35,044 | Variance of sample autocorrelation (Ljung-Box) | Edit (05/07/2023)
When answering this question, I realized the job actually can be done by summoning only Lemma 1 below (hence avoiding the much more difficult Lemma 2), which substantially reduces the machinery and calculations. The argument (largely exploiting the independence between $T(X)$ and $r$) may be how the authors discovered the variance expression.
With the notations introduced in my old answer, the goal is to prove
\begin{align}
& E[T_iT_j] = 0, & 1 \leq i \neq j \leq n, \tag{i} \\
& E[T_i^2T_j^2] = \frac{1}{n(n + 2)}, & 1 \leq i \neq j \leq n, \tag{ii} \\
& E[T_i^2T_jT_k] = 0, & i, j, k \text{ distinct}, \tag{iii} \\
& E[T_iT_jT_kT_l] = 0, & i, j, k, l \text{ distinct}. \tag{iv}
\end{align}
We assume normality of $X$ from now on. To prove (i), write $X_iX_j = T_iT_j \times r^2$. Since $T_iT_j$ is independent of $r^2$ ($T_iT_j$ is a function of $T(X)$), $E[X_iX_j] = E[T_iT_jr^2] = E[T_iT_j]E[r^2]$. But then $E[X_iX_j] = E[X_i]E[X_j] = 0$ (by normality) and $E[r^2] > 0$ imply that $E[T_iT_j] = 0$. In the same manner, (iii) and (iv) hold.
Similarly, $X_i^2X_j^2 = T_i^2T_j^2 \times r^4$ and independence imply that $E[X_i^2X_j^2] = E[T_i^2T_j^2]E[r^4]$, whence
\begin{align}
E[T_i^2T_j^2] = \frac{E[X_i^2]E[X_j^2]}{E[r^4]} = \frac{1}{E[r^4]}.
\end{align}
So it suffices to determine $E[r^4]$, which is straightforward:
\begin{align}
E[r^4] &= E[(X_1^2 + \cdots + X_n^2)^2] = nE[X_1^4] + 2\binom{n}{2}E[X_1^2X_2^2] \\
&= 3n + n(n - 1) = n(n + 2).
\end{align}
This completes the proof of (ii).
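The two moment facts used just above, $E[X_1^4]=3$ for a standard normal and $E[r^4]=E[(\chi^2_n)^2]=n(n+2)$, can be checked against SciPy's raw moments (a small sketch):

```python
from scipy.stats import chi2, norm

n = 7
print(norm.moment(4))        # E[a^4] = 3 for a standard normal
print(chi2(df=n).moment(2))  # E[(chi2_n)^2] = n(n + 2) = 63
```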
Old Answer (04/05/2023)
One way to show this result is to use the following two lemmas (Lemma 1 and Lemma 2 are Theorem 1.5.6 and Exercise 1.32 in Aspects of Multivariate Statistical Theory by R. Muirhead respectively):
Lemma 1. If $X$ has an $m$-variate spherical distribution with $P(X = 0) = 0$ and $r = \|X\| = (X'X)^{1/2}, T(X) = \|X\|^{-1}X$, then $T(X)$ is
uniformly distributed on $S_m$ and $T(X)$ and $r$ are independent.
Lemma 2. Let $T$ be uniformly distributed on $S_m$ and partition $T$ as $T' = (\mathbf{T_1}' | \mathbf{T_2}')$, where $\mathbf{T_1}$ is $k \times 1$ and $\mathbf{T_2}$ is $(m - k) \times 1$. Then $\mathbf{T_1}$ has density function
\begin{align*}
f_{\mathbf{T_1}}(u) = \frac{\Gamma(m/2)}{\pi^{k/2}\Gamma[(m - k)/2]}(1 - u'u)^{(m - k)/2 - 1}, \quad 0 < u'u < 1. \tag{1}
\end{align*}
Here $S_m$ stands for the unit sphere in $\mathbb{R}^m$: $S_m = \{x \in \mathbb{R}^m: x'x = 1\}$. The proof of Lemma 1 can be found in the referenced text, and the proof of Lemma 2 can be found in this link (NOT EASY!).
With these preparations, now let's attack the problem. Denote $X = (a_1, \ldots, a_n) \sim N_n(0, I_{(n)})$, then by Lemma 1$^\dagger$, $r_k$ can be rewritten as
\begin{align*}
r_k = T_1T_{k + 1} + T_2T_{k + 2} + \cdots + T_{n - k}T_n,
\end{align*}
where $T := (T_1, \ldots, T_n)' = X/\|X\|$ has uniform distribution on $S_n$.
By Lemma 2, for $1 \leq i \neq j \leq n$, we have
\begin{align*}
& E(T_iT_j) = \int_{0 < t_i^2 + t_j^2 < 1}t_it_jf_{(T_i, T_j)}(t_i, t_j)dt_idt_j, \tag{2} \\
& E(T_i^2T_j^2) = \int_{0 < t_i^2 + t_j^2 < 1}t_i^2t_j^2f_{(T_i, T_j)}(t_i, t_j)dt_idt_j, \tag{3}
\end{align*}
where $f_{(T_i, T_j)}(t_i, t_j)$ is given by $(1)$ with $\mathbf{T_1} = (T_i, T_j)$. To evaluate $(2)$ and $(3)$, apply the polar transformation $t_i = r\cos\theta, t_j = r\sin\theta$, $0 < r < 1, 0 \leq \theta < 2\pi$. It then follows that
\begin{align}
E(T_iT_j) = \frac{\frac{n}{2} - 1}{\pi}\int_0^1\int_0^{2\pi}r^2\sin\theta\cos\theta(1 - r^2)^{n/2 - 2}rdrd\theta = 0. \tag{4}
\end{align}
This is because $\int_0^{2\pi}\sin\theta\cos\theta d\theta = 0$.
In addition, it follows by
\begin{align}
& \int_0^{2\pi}\sin^2\theta\cos^2\theta d\theta = \frac{1}{4}\pi, \\
& \int_0^1 r^5(1 - r^2)^{n/2 - 2}dr = \frac{1}{2}B\left(3, \frac{n}{2} - 1\right) =
\frac{1}{(\frac{n}{2} + 1) \times \frac{n}{2} \times (\frac{n}{2} - 1)}
\end{align}
that
\begin{align}
E(T_i^2T_j^2) = \frac{\frac{n}{2} - 1}{\pi}\int_0^1\int_0^{2\pi}r^4\sin^2\theta\cos^2\theta(1 - r^2)^{n/2 - 2}rdrd\theta = \frac{1}{n(n + 2)}. \tag{5}
\end{align}
To complete the evaluation of cross-product terms from $\operatorname{Var}(r_k)$, it remains to show $E[T_a^2T_bT_c] = 0$ for distinct $a, b, c \in \{1, \ldots, n\}$ and $E[T_aT_bT_cT_d] = 0$ for distinct $a, b, c, d \in \{1, \ldots, n\}$. These calculations are shown as follows.
To calculate $E[T_a^2T_bT_c]$, applying lemma 2 with $\mathbf{T_1} = (T_a, T_b, T_c)$ yields
\begin{align*}
E(T_a^2T_bT_c) = \int_{0 < t_a^2 + t_b^2 + t_c^2 < 1}t_a^2t_bt_cf_{(T_a, T_b, T_c)}(t_a, t_b, t_c)dt_adt_bdt_c. \tag{6}
\end{align*}
Under the spherical transformation
\begin{align*}
& t_a = r\cos(\theta_1), \\
& t_b = r\sin(\theta_1)\cos(\theta_2), \\
& t_c = r\sin(\theta_1)\sin(\theta_2),
\end{align*}
where $0 < r < 1$, $0 \leq \theta_1 < \pi$, $0 \leq \theta_2 < 2\pi$, the integrand in $(6)$ that includes $\theta_1, \theta_2$ is (after multiplying the Jacobian determinant) $\cos^2(\theta_1)\sin^3(\theta_1)\cos(\theta_2)\sin(\theta_2)$, which integrates to $0$ over $[0, \pi) \times [0, 2\pi)$. Hence $E[T_a^2T_bT_c] = 0$.
To calculate $E[T_aT_bT_cT_d]$, applying lemma 2 with $\mathbf{T_1} = (T_a, T_b, T_c, T_d)$ yields
\begin{align*}
E(T_aT_bT_cT_d) = \int_{0 < t_a^2 + t_b^2 + t_c^2 + t_d^2 < 1}t_at_bt_ct_df_{(T_a, T_b, T_c, T_d)}(t_a, t_b, t_c, t_d)dt_adt_bdt_cdt_d. \tag{7}
\end{align*}
Under the spherical transformation
\begin{align*}
& t_a = r\cos(\theta_1), \\
& t_b = r\sin(\theta_1)\cos(\theta_2), \\
& t_c = r\sin(\theta_1)\sin(\theta_2)\cos(\theta_3), \\
& t_d = r\sin(\theta_1)\sin(\theta_2)\sin(\theta_3), \\
\end{align*}
where $0 < r < 1$, $0 \leq \theta_1, \theta_2 < \pi$, $0 \leq \theta_3 < 2\pi$, the integrand in $(7)$ that includes $\theta_1, \theta_2, \theta_3$ is (after multiplying the Jacobian determinant) $\cos(\theta_1)\sin^5(\theta_1)\cos(\theta_2)\sin^3(\theta_2)\cos(\theta_3)
\sin(\theta_3)$, which integrates to $0$ over $[0, \pi) \times [0, \pi) \times [0, 2\pi)$. Hence $E[T_aT_bT_cT_d] = 0$.
To summarize all these pieces, we conclude that $E[r_k] = 0$ and
\begin{align}
& \operatorname{Var}(r_k) = E[r_k^2] \\
=& E(T_1^2T_{k + 1}^2) + \cdots + E(T_{n - k}^2T_n^2) + \sum E[T_a^2T_bT_c] + \sum E[T_aT_bT_cT_d] \\
=& (n - k) \times \frac{1}{n(n + 2)} = \frac{n - k}{n(n + 2)}.
\end{align}
This completes the proof. As a by-product, the fact that both $(6)$ and $(7)$ are identically $0$ also readily (now it is truly "readily") implies that $r_k$ and $r_l$ are uncorrelated when $k \neq l$, which is another proposition claimed in the original paper.
$^\dagger$: The condition of Lemma 1 implies that the main result still holds when the distribution assumption of innovations $(a_1, \ldots, a_n)$ is slightly generalized to spherical distributions (from Gaussian).
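The final formula is easy to sanity-check by simulation (a sketch in Python rather than R; $r_k$ below is the lag-$k$ sample autocorrelation of Gaussian white noise in its usual $\sum_t a_ta_{t+k}/\sum_t a_t^2$ form):

```python
import numpy as np

rng = np.random.default_rng(0)
n, k, reps = 20, 5, 200_000

# reps independent Gaussian white-noise series of length n
A = rng.normal(size=(reps, n))
r_k = (A[:, :-k] * A[:, k:]).sum(axis=1) / (A**2).sum(axis=1)

print(f"simulated Var(r_k)         = {r_k.var():.5f}")
print(f"theoretical (n-k)/(n(n+2)) = {(n - k) / (n * (n + 2)):.5f}")  # 0.03409
```

With these settings the simulated variance agrees with $(n-k)/(n(n+2))$ to within Monte Carlo error.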
35,045 | Variance of sample autocorrelation (Ljung-Box) | This might not be the most elegant proof, but I believe it answers the question:
We assume $a_1,a_2,\ldots,a_n$ are i.i.d. standard normal (we could, as in Ljung-Box, allow a general variance $\sigma^2$, but dividing both the numerator and denominator in the expression for $r_k$ by $\sigma^2$ brings us back to standardized variables, so this loses no generality).
Consider, for arbitrary $k \ne m$ $$ r = \frac{a_ka_m}{\sum_{t=1}^n a_t^2} = \frac{a_ka_m}{a_k^2 + a_m^2 + Z^2}$$
where $Z^2 \sim \chi^2_{n-2}$ and $Z^2,a_k,a_m$ are all independent. It follows from symmetry$^1$ that $r$ has zero mean, $E[r]=0$. Next introduce the rotated variables:
$$ u = (a_k+a_m)/\sqrt{2} , v = (a_k-a_m)/\sqrt{2} $$
which are also independent and standard normal, and observe that $a_k^2 + a_m^2 = u^2+v^2$, $a_ka_m = (u^2-v^2)/2$, so we have
$$r = \frac{1}{2}\left( \frac{u^2}{u^2+v^2+Z^2} - \frac{v^2}{u^2+v^2+Z^2} \right) \equiv \frac{1}{2}(U - V).$$
Now notice that $U$ and $V$ are ratios of Chi-squared random variables, which have a known Beta distribution:
$$U,V \sim \text{Beta}\left(\frac{1}{2},\frac{n-1}{2}\right)$$
with $E[U]=\frac{1}{n}$ and $E[U^2]= \frac{3}{n(n+2)}$ following from the properties of the Beta distribution. Furthermore,
notice that $UV = \left( \frac{uv}{u^2+v^2+Z^2} \right)^2 $ has exactly the same distribution as $r^2$, so $E[UV]=E[r^2]=Var(r)$.
From $r=(U-V)/2$ we also have
$$\begin{align*}
Var(r) &= \frac{1}{4}( Var(U) + Var(V) - 2Cov(U,V) ) \\
&= \frac{1}{2}( Var(U) - Cov(U,V) ) \\
&= \frac{1}{2}( Var(U) - E[UV] + E[U]^2 ) \\
&= \frac{1}{2}( E[U^2] - Var(r) )
\end{align*}$$
So finally,
$$ Var(r) = \frac{1}{3}E[U^2] = \frac{1}{n(n+2)} .$$
To complete the proof we can observe that $r_k$ is a sum of $(n-k)$ terms which all have the same distribution as $r$, namely $r_k = \sum_{t=k+1}^n r_{t,t-k}$, and they are uncorrelated since for any index pairs with $\{k,m\} \neq \{s,t\}$, $Cov(r_{k,m},r_{s,t}) = E[r_{k,m} \cdot r_{s,t}] = 0$, following again from symmetry. Therefore,
$$Var(r_k) = (n-k)Var(r) = \frac{n-k}{n(n+2)}.$$
$^1$ Since the joint distribution of $\{a_t\}$ is symmetric with respect to interchanging $a_t \leftrightarrow -a_t$, the expectation of any odd function (with respect to any of its arguments) is zero. For example, define $\tilde a_k = -a_k$, such that $ r = -\frac{\tilde a_ka_m}{\sum_{t=1}^n a_t^2} \equiv -\tilde r$. Since $r$ and $\tilde r$ have the same distribution, $E[r] = E[\tilde r]=-E[r]$, implying that $E[r]=0$. The same argument holds for any other odd function such as $\frac{a_ka_ma_sa_t}{(\sum_{t=1}^n a_t^2)^2}$, provided that at least one of the $a_i$'s in the numerator has an odd power. (This is essentially the same argument as given, e.g., here )
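The closed-form result $\operatorname{Var}(r_k) = (n-k)/(n(n+2))$ is easy to verify by Monte Carlo. The Python sketch below is not part of either original answer; the choices of $n$, $k$ and the replication count are arbitrary:

```python
import numpy as np

# Monte Carlo check of Var(r_k) = (n - k)/(n(n + 2)) for Gaussian white noise,
# with r_k = sum_{t=k+1}^n a_t a_{t-k} / sum_t a_t^2 as defined above.
rng = np.random.default_rng(42)
n, k, reps = 20, 3, 200_000

a = rng.standard_normal((reps, n))
num = np.sum(a[:, k:] * a[:, :-k], axis=1)  # sum_{t=k+1}^n a_t a_{t-k}
r_k = num / np.sum(a**2, axis=1)

print(np.mean(r_k))  # ~ 0
print(np.var(r_k))   # ~ (n - k)/(n(n + 2)) = 17/440
```

With these settings the empirical variance agrees with $17/440 \approx 0.0386$ to three decimal places.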
35,046 | Sum of sample given a priori knowledge of its maximum | This is only a partial answer for the case of $n=2$. Let $p(x)$ and $F(x)$ denote the pmf and cdf of $X_1$ and $X_2$. The joint pmf of the minimum and the maximum $X_{(1)},X_{(2)}$ is then clearly
$$
p_{X_{(1)},X_{(2)}}(x_1,x_2)=\begin{cases}
p(x_1)p(x_2) & \text{for }x_1=x_2 \\
2p(x_1)p(x_2) & \text{for }x_1<x_2
\end{cases}
$$
since $X_{(1)},X_{(2)}$ can take the same value only in one way whereas they can take different values in two ways.
Conditional on $X_{(2)}=x_2$ the pmf of $X_{(1)}$ is thus
$$
p_{X_{(1)}|X_{(2)}=x_2}(x_1)=\frac{p_{X_{(1)},X_{(2)}}(x_1,x_2)}{p_{X_{(2)}}(x_2)}.
$$
The cdf of $X_{(2)}$ is
$$
F_{X_{(2)}}(x_2)=F(x_2)^2
$$
and hence the pmf of $X_{(2)}$ is
$$
p_{X_{(2)}}(x_2)=F(x_2)^2-F(x_2-1)^2.
$$
The pmf of the sum conditional on $X_{(2)}=a$ is then
$$
P(X_{(1)}+X_{(2)}=x|X_{(2)}=a)=p_{X_{(1)}|X_{(2)}=a}(x-a)
$$
which will agree with your simulations.
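The conditional pmf derived above can be cross-checked numerically. The following Python sketch (illustrative only; the Binomial(20, 1/2) distribution and $a = 10$ match the example used elsewhere on this page) compares the formula against brute-force enumeration:

```python
import numpy as np
from math import comb

# Check the n = 2 formulas for X_i ~ Binomial(20, 1/2), conditioning on max = 10.
N, q, a = 20, 0.5, 10
p = np.array([comb(N, x) * q**x * (1 - q)**(N - x) for x in range(N + 1)])
F = np.cumsum(p)

# pmf of the sum given max = a, via the joint pmf of (min, max)
pmf_max_a = F[a]**2 - (F[a - 1]**2 if a > 0 else 0.0)
cond = {}
for x1 in range(a + 1):
    joint = p[x1] * p[a] * (1 if x1 == a else 2)
    cond[x1 + a] = joint / pmf_max_a

# brute-force check: enumerate all ordered pairs (x1, x2) with max = a
brute, tot = {}, 0.0
for x1 in range(N + 1):
    for x2 in range(N + 1):
        if max(x1, x2) == a:
            brute[x1 + x2] = brute.get(x1 + x2, 0.0) + p[x1] * p[x2]
            tot += p[x1] * p[x2]
brute = {s: v / tot for s, v in brute.items()}

print(max(abs(cond[s] - brute[s]) for s in cond))  # ~ 0
```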
35,047 | Sum of sample given a priori knowledge of its maximum | The problem with your initial approach is that it does not properly deal with the possibility that more than one of the $X_i$ is $a$; it has the tendency to overcount these cases (which tend to have higher sums) and undercount cases where a single value is the maximum (which tend to have lower sums), which is why your blue bars are shifted right relative to your orange bars.
As a much simpler example, consider $n=2, a=1, F \sim Bin(1,\frac12)$. Given the maximum is $1$, the total is twice as likely to be $1$ as $2$ (cases $1+0, 0+1,1+1$ are equally likely) but your initial approach could suggest wrongly that the totals of $1$ or $2$ are equally likely (cases $1+0, 1+1$).
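This tiny enumeration (not part of the original answer) confirms the $2{:}1$ ratio in the coin example:

```python
from itertools import product
from fractions import Fraction

# Two fair Bernoulli draws; keep only outcomes whose maximum is 1.
cases = [c for c in product([0, 1], repeat=2) if max(c) == 1]  # (0,1), (1,0), (1,1)
p_total = {}
for c in cases:  # each surviving outcome is equally likely (prob 1/3)
    p_total[sum(c)] = p_total.get(sum(c), Fraction(0)) + Fraction(1, len(cases))

print(p_total)  # total 1 is twice as likely as total 2
```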
For your example, it is possible to calculate the possibilities, for example in R:
# Alter these for different distributions or maximum or number of random variables summed
n <- 2 # number RVs to be summed
a <- 10 # maximum allowed
probtrunc <- dbinom(0:a, 20, 1/2) # truncated pmf
# Calculations
probless <- c(probtrunc[-(a+1)],0)
probsum <- numeric((n+1)*a+1)
probsum[a+1] <- 1
probsumless <- probsum
for (i in 1:n){
for (j in ((i+1)*a+1):(a+1)){
probsum[j] <- sum(probsum[j:(j-a)] * probtrunc)
probsumless[j] <- sum(probsumless[j:(j-a)] * probless)  # slice length matches probless (avoids recycling warnings; last probless entry is 0)
}
if (i == n-1){
probearly <- probsum
}
}
probdiff <- probsum - probsumless
results <- cbind(total=a:(n*a),
actualprob=(probdiff/sum(probdiff))[(2*a+1):((n+1)*a+1)],
yourcalc=(probearly/sum(probearly))[(a+1):(n*a+1)] )
giving the results
print(results)
# total actualprob yourcalc
# [1,] 10 1.907349e-06 1.621623e-06
# [2,] 11 3.814697e-05 3.243247e-05
# [3,] 12 3.623962e-04 3.081084e-04
# [4,] 13 2.174377e-03 1.848651e-03
# [5,] 14 9.241104e-03 7.856765e-03
# [6,] 15 2.957153e-02 2.514165e-02
# [7,] 16 7.392883e-02 6.285412e-02
# [8,] 17 1.478577e-01 1.257082e-01
# [9,] 18 2.402687e-01 2.042759e-01
#[10,] 19 3.203583e-01 2.723679e-01
#[11,] 20 1.761971e-01 2.996046e-01
which seems to match your bar chart of your simulations.
35,048 | What are the benefits of time-series over a well-setup linear regression for forecasting? | It is somewhat of a false dichotomy that one has to choose between using a time series framework OR a regression framework.
The main reason for wanting to use a time-series framework would be autocorrelated errors. With this in mind, one can incorporate autocorrelated errors into regression by using Regression with ARMA errors (The ARIMAX model muddle - Hyndman) and in a sense get the 'best of both worlds'.
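To make the idea concrete, here is a minimal sketch (not from the original answer; all data and parameter values are invented) of regression with AR(1) errors, fitted with a two-step Cochrane-Orcutt style procedure. In practice one would use a full implementation such as R's forecast package (Arima with an xreg argument) or Python's statsmodels SARIMAX:

```python
import numpy as np

# Toy model: y_t = b0 + b1*x_t + u_t with AR(1) errors u_t = rho*u_{t-1} + e_t.
rng = np.random.default_rng(1)
T, b0, b1, rho = 2000, 2.0, 0.5, 0.7

x = rng.standard_normal(T)
e = rng.standard_normal(T)
u = np.zeros(T)
for t in range(1, T):
    u[t] = rho * u[t - 1] + e[t]
y = b0 + b1 * x + u

# Step 1: OLS, then estimate rho from the lag-1 autocorrelation of residuals.
X = np.column_stack([np.ones(T), x])
beta_ols, *_ = np.linalg.lstsq(X, y, rcond=None)
res = y - X @ beta_ols
rho_hat = res[1:] @ res[:-1] / (res[:-1] @ res[:-1])

# Step 2: quasi-difference y_t - rho*y_{t-1} and re-run OLS; the transformed
# model y_t - rho*y_{t-1} = b0*(1 - rho) + b1*(x_t - rho*x_{t-1}) + e_t
# has (approximately) uncorrelated errors.
ys = y[1:] - rho_hat * y[:-1]
Xs = np.column_stack([(1 - rho_hat) * np.ones(T - 1), x[1:] - rho_hat * x[:-1]])
beta_gls, *_ = np.linalg.lstsq(Xs, ys, rcond=None)

print(rho_hat, beta_gls)  # close to 0.7 and (2.0, 0.5)
```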
35,049 | What are the benefits of time-series over a well-setup linear regression for forecasting? | Autocorrelation
Here is a counterexample. Suppose the data generating process is either exactly an invertible MA($1$) model or well approximated by one. If you wanted to approximate it by a time series regression about as well, you would need an AR($\infty$) model. This is not feasible, so you would end up with an AR($p$) with some large $p$. Estimating the $p+2$ parameters* of the AR($p$) model will make it have high variance and thus perform poorly in forecasting. Meanwhile, you could use an MA($1$) model instead. It has only 1+2 parameters* and thus much lower variance and better forecast accuracy.
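To illustrate the AR($\infty$) point numerically, the sketch below (not part of the original answer; $\theta$, $n$ and $p$ are arbitrary) shows that even a truncated AR(30) representation of an invertible MA(1) recovers the innovations only up to an error of order $\theta^{p+1}$:

```python
import numpy as np

# Invertible MA(1): x_t = a_t + theta*a_{t-1} has the AR(inf) form
# x_t = sum_{j>=1} -(-theta)^j x_{t-j} + a_t; a finite AR(p) only approximates it.
rng = np.random.default_rng(0)
theta, n, p = 0.6, 50_000, 30

a = rng.standard_normal(n)
x = a.copy()
x[1:] += theta * a[:-1]

phi = -(-theta) ** np.arange(1, p + 1)  # truncated AR(inf) coefficients
resid = x.copy()
for j, c in enumerate(phi, start=1):
    resid[j:] -= c * x[:-j]

# After a burn-in of p observations, the AR(p) residuals recover the true
# innovations up to a truncation error of order theta^(p+1).
print(np.max(np.abs(resid[p:] - a[p:])))  # tiny for theta = 0.6, p = 30
```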
Seasonality
Here is another counterexample. Consider a parsimonious SARIMA model as a data generating process or its close approximation. The seasonality generated by a SARIMA model cannot be modelled by dummy variables (flags as you called them). Moreover, approximating it with a time series regression might take a lot of variables and bring in a lot of unnecessary estimation variance.
Even in the very simplest instance of SARIMA(1,0,0)(1,0,0), the SARIMA model will have 2+2 parameters* to be estimated while an equivalent autoregression (which you can consider as a time series regression) will have 3+2 parameters. This way you would be estimating one superfluous parameter and thereby increasing the variance of the model. If we were to add some moving average terms, this would get significantly worse.
Extrapolation
If by that you mean forecasting into the future (beyond the available data), then the above two points apply. Otherwise I agree that there are challenges to both approaches.
Regarding including additional variables, regression with ARMA errors (as already suggested by David Veitch) is always an option. But these additional variables typically also need to be forecast, which brings us to vector autoregressions (VAR) and VARMA models.
In conclusion, it is good to know both time series regression and ARIMA-type time series models. There will be situations where the former is more natural or more effective, and there will be situations where the opposite is the case. You can then only benefit from having both tools under your belt and being able to use whichever one works better at the given task.
*+2 comes from the intercept and the error variance.
35,050 | Is data-driven modelling and machine learning the same thing? | The term "machine learning" is somewhat a term of art, but it generally refers to the construction of algorithms that "learn through experience". The requirement of learning through experience necessitates data, and so machine learning is necessarily "data-driven" --- after all, if not from data, what else would it learn from?
When we refer to a "model" in statistics or machine learning, we really just mean a set of assumptions that describe the presumed probabilistic process for the data, and the logical consequences of the assumptions (e.g., resulting distributions of statistics, estimators, etc.). Even very broad forms of non-parametric models are considered "models", so it encompasses a lot. It is difficult to conceive of how you could generate a machine learning algorithm without some assumptions about the generative process for the data, and consequently, one can probably broadly use the term "modelling" for any machine learning process. One might quibble with this, since some machine learning algorithms are broad non-parametric methods, but even here we usually call these "models", and consequently, I think it is reasonable to say that machine learning methods are built on "models". Even such simple methods as least-squares estimation are built on underlying statistical models.
There may certainly be situations in machine learning where an algorithm is built, and even deployed, without regard to setting underlying probabilistic assumptions, provided the algorithm is sufficiently adaptive (in the sense that most non-parametric models are). In this case one could argue that the algorithm is "model-free" insofar as it was created without regard to any model. Even then, and even if the algorithm works well in a wide class of situations, one will still tend to find that there are cases where it works well and cases where it works badly. Consequently, subsequent analysts will usually be able to figure out the kinds of assumptions required to ensure that the algorithm works well when deployed in a situation. In this case, the "modelling" gradually catches up to the initial "model-free" creation of the algorithm as we begin to learn more about the situations where the algorithm works well or badly. So you could call some machine-learning algorithms "model-free" in one sense, but modelling catches us up in the end.
In view of these considerations, I think it is reasonable to say that all machine learning involves data-driven modelling. Of course, it is possible to do data-driven modelling without using a computer algorithm at all (e.g., calculation by pen and paper), and in these cases we would not usually call that "machine learning".
35,051 | Is data-driven modelling and machine learning the same thing? | TL;DR: IMO, data-driven is a broader term, but it's a matter of definition.
Different people might have different understanding of the terms "Machine Learning" and "data-driven", so I'm slightly (pleasantly) surprised that this question hasn't been closed as "opinion based". Since it still stands, I'll offer my opinion.
Historically, Machine Learning evolved as an attempt to make machines "intelligent", by allowing them to learn from "experience" (i.e. data), often by mimicking how living beings learn. So it was necessarily "data-driven". In other words, ML $\subseteq$ DD.
However, some statisticians also consider statistical modelling to be "data-driven" (e.g. Efron & Hastie, "Computer age statistical inference", p. 264). If you agree with that and if you consider data-driven statistical methods to be distinct from Machine Learning, then, obviously, "data-driven" is a broader term: DD $\supset$ ML.
(Personally, I'd rather contrast "data-driven" to "domain knowledge-driven", "probability model-based", or simply "parametric", but still leading to the same result)
There is, of course, considerable disagreement about terminology. Some statisticians consider Machine Learning to be a subset of Statistics (and most machine learners would disagree). Some machine learners consider some traditionally statistical methods, like linear or logistic regression, to be "machine learning" methods (and most statisticians would disagree). If you side with the statisticians on this point, these models would be examples of data-driven models that are not machine learning.
P.S. I disagree with bogovicj's comment. ML always builds models; it is just that in some cases these models are not made explicit to the users. But ML algorithms certainly make some internal representations of the "things" (e.g. classes) they have learned and these representations are, for all practical purposes, synonymous with "models".
35,052 | Is data-driven modelling and machine learning the same thing? | I think that the term "data-driven" has now become very popular because of deep learning techniques. The big change is that we no longer hand-craft features, but design architectures and learning strategies that help the network learn the features directly from data. This has the advantage of being able to learn the "optimal" features for a given problem in an automated way, without having to rethink the design of those features. As an example, UNet is an enormously popular architecture for segmentation. For a given problem you collect some data, choose the best fitting loss function, and you usually get pretty good results. With or without some transfer learning (fine-tuning of pretrained models).
Otherwise, machine learning is about designing programs that solve problems by learning from data, instead of designing hand-crafted algorithms specific to a given situation. So yes, data-driven modelling is just part of machine learning.
35,053 | Is data-driven modelling and machine learning the same thing? | When people use data-driven modeling and machine learning interchangeably, they normally intend for them to mean the same thing. Simple definitions might be like this:
Data-driven modeling: The process of using data to derive the functional form of a model or the parameters of an algorithm.
Machine learning: The process of fitting parameters to data to minimize a cost function when the model is applied to the data. The "learning" part requires data.
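As a minimal sketch of that second definition (the data points and learning rate below are invented for illustration), "learning" a one-parameter model means nudging the parameter downhill on a squared-error cost computed from data:

```python
# "Learning": adjust parameter w to minimize a squared-error cost on data.
# The data points and learning rate are made up for this example.
xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.1, 3.9, 6.2, 7.8]  # roughly y = 2x

w, lr = 0.0, 0.01
for _ in range(500):
    # gradient of the mean squared error (w*x - y)^2 with respect to w
    grad = sum(2 * (w * xi - yi) * xi for xi, yi in zip(xs, ys)) / len(xs)
    w -= lr * grad  # step downhill on the cost surface

print(round(w, 2))  # -> 1.99
```

Without the data, there is nothing to "learn" from — which is why it is hard to think of a machine learning model that is not data-driven.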
One example of data-driven modeling that is not machine learning might be physics-based modeling. In some cases of physics-based modeling, the results of an underlying physics process are compared with data, but the error with respect to the data does not update the model parameters. Therefore, the model is not machine learning.
I don't think there are any machine learning models that are not data-driven.
One other note: These phrases have no real definitions, and many people define them for their own purposes. For example, see this IBM page.
35,054 | Sklearn Average_Precision_Score vs. AUC | AUC (or AUROC, area under receiver operating characteristic) and AUPR (area under precision recall curve) are threshold-independent methods for evaluating a threshold-based classifier (e.g. logistic regression). Average precision score is a way to calculate AUPR. We'll discuss AUROC and AUPRC in the context of binary classification for simplicity.
Tl;dr:
The ROC is a curve that plots true positive rate (TPR) against false positive rate (FPR) as your discrimination threshold varies.
AUROC is the area under that curve (ranging from 0 to 1); the higher the AUROC, the better your model is at differentiating the two classes.
AUPRC is the area under the precision-recall curve, which similarly plots precision against recall at varying thresholds.
sklearn.metrics.average_precision_score gives you a way to calculate AUPRC.
On AUROC
The ROC curve is a parametric function in your threshold $T$, plotting false positive rate (a.k.a. 1 - specificity, usually on x-axis) versus true positive rate (a.k.a. recall, on y-axis). Intuitively, this metric tries to answer the question "as my decision threshold varies, how well can my classifier discriminate between negative + positive examples?" In fact, AUROC is statistically equivalent to the probability that a randomly chosen positive instance will be ranked higher than a randomly chosen negative instance (by relation to the Wilcoxon rank test -- I don't know the details of the proof though).
Here's a nice schematic that illustrates some of the core patterns to know:
Red dotted line: As you can see, a random classifier will have an AUROC of 0.5. To see why, think about what happens when you choose a threshold such that 60% of all points are randomly predicted positive; then try to reason about the expected TPR/FPR in that case.
Green, blue, and orange lines: The AUROC increases for each of these curves as they "swell" away from the red dotted line, towards the blue dotted point. These classifiers get better and better: at varying thresholds, the TPR consistently "outpaces" the FPR.
For further reading -- Section 7 of this is highly informative, which also briefly covers the relation between AUROC and the Gini coefficient. You can also find a great answer for an ROC-related question here. Lastly, here's a (debatable) rule-of-thumb for assessing AUROC values: 90%—100%: Excellent, 80%—90%: Good, 70%—80%: Fair, 60%—70%: Poor, 50%—60%: Fail.
On AUPR
One of the key limitations of AUROC becomes most apparent on highly imbalanced datasets (low % of positives, lots of negatives), e.g. many medical datasets, rare event detection problems, etc. Small changes in the number of false positives/false negatives can severely shift AUROC. AUPR, which plots precision vs. recall parametrically in threshold $t$ (similar setup to ROC, except the variables plotted), is more robust to this problem.
The baseline value for AUPR is equivalent to the ratio of positive instances to negative instances; i.e. $\left(\frac{\#(+)}{\#(-)\; + \;\#(+)}\right)$. Similarly to AUROC, this metric ranges from 0 to 1, and higher is "better."
Now, to address your question about average precision score more directly, this gives us a method of computing AUPR using rectangles somewhat reminiscent of Riemann summation (without the limit business that gives you the integral). Average precision score gives us a guideline for fitting rectangles underneath this curve prior to summing up the area. Let's say that we're doing logistic regression and we sample 11 thresholds: $T = \{0.0, 0.1, 0.2, \dots, 1.0\}$. Each threshold $t_n$ is going to give us a corresponding value of precision and recall $P_n, R_n$; we can plot each of those and connect the dots. Perhaps we end up with a curve like the one we see below. The width of the rectangle is the difference in recall achieved at the $n$th and $(n-1)$th thresholds; the height is the precision achieved at the $n$th threshold. You can easily see from the step-wise shape of the curve how one might try to fit rectangles underneath the curve to compute the area underneath.
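With untied scores, that rectangle sum collapses to averaging the precision at each positive example's rank, since recall rises by exactly $1/n_{pos}$ each time a true positive is crossed. Here is a pure-Python sketch (the labels and scores are made up; for these inputs sklearn.metrics.average_precision_score should return the same value):

```python
def average_precision(y_true, y_score):
    """Rectangle sum AP = sum_n (R_n - R_{n-1}) * P_n.

    Sweeping the threshold down, recall jumps by 1/n_pos exactly at each
    true positive, so AP is the mean precision at those ranks.
    """
    order = sorted(range(len(y_score)), key=lambda i: -y_score[i])
    n_pos = sum(y_true)
    tp, ap = 0, 0.0
    for rank, i in enumerate(order, start=1):
        if y_true[i] == 1:
            tp += 1
            ap += (1 / n_pos) * (tp / rank)  # width * height of one rectangle
    return ap

# Made-up example: positives ranked 1st, 2nd and 4th by score
print(average_precision([1, 1, 0, 1, 0], [0.9, 0.8, 0.7, 0.6, 0.5]))  # 0.9166...
```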
On a related note, yes, you can also squish trapezoids underneath the curve (this is what sklearn.metrics.auc does) -- think about what advantages/disadvantages might occur in that case.
For further reading, I found this to be a nice resource for showing the limitations of AUROC in favor of AUPR in some cases.
35,055 | Difference Omitted Variable Bias and Confounding? | Omitted variable bias (OVB) is agnostic to the causal relationship between $X$ and $Z$. It concerns only the ability to estimate $\tau$ in the structural model for $Y$. The joint distribution of $Y$, $X$, and $Z$ is compatible both with a data-generating process in which $Z$ is a confounder of the $X \rightarrow Y$ relationship, so that $\tau$ represents the total effect of $X$ on $Y$, and with a data-generating process in which $Z$ is a mediator of the $X \rightarrow Y$ relationship, so that $\tau$ represents the direct effect of $X$ on $Y$.
In the confounding model, the data-generating process for $X$ and $Z$ is:
$$
Z := \epsilon_Z \\
X := \gamma Z + \epsilon_X
$$
In the mediation model, the data-generating process for $X$ and $Z$ is:
$$
Z := \alpha X + \epsilon_Z \\
X := \epsilon_X
$$
For the confounding process, omitting $Z$ from the model for $Y$ yields a biased estimate of $\tau$, the total effect of $X$ on $Y$. This is the classic bias due to an omitted confounder.
For the mediation process, the $X \rightarrow Y$ relationship is not confounded. The estimated coefficient $\hat \tau$ in the model omitting $Z$ is unbiased for the total causal effect of $X$ on $Y$. However, it is biased for $\tau$, the direct effect of $X$ on $Y$.
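The contrast can be checked with a quick simulation. Assuming an outcome model $Y := \tau X + \beta Z + \epsilon_Y$ (the coefficient values below are arbitrary choices for illustration), regressing $Y$ on $X$ alone is biased for $\tau$ under the confounding process, but recovers the total effect $\tau + \beta\alpha$ under the mediation process:

```python
import random
random.seed(0)

n = 100_000
tau, beta, gamma, alpha = 1.0, 2.0, 0.8, 0.8  # arbitrary illustrative values

def slope(x, y):
    # OLS slope of y on x (with intercept): cov(x, y) / var(x)
    mx, my = sum(x) / n, sum(y) / n
    return sum((a - mx) * (b - my) for a, b in zip(x, y)) / sum((a - mx) ** 2 for a in x)

# Confounding: Z -> X and Z -> Y
z = [random.gauss(0, 1) for _ in range(n)]
x = [gamma * zi + random.gauss(0, 1) for zi in z]
y = [tau * xi + beta * zi + random.gauss(0, 1) for xi, zi in zip(x, z)]
print(slope(x, y))   # biased away from tau (the total effect here)

# Mediation: X -> Z -> Y
x2 = [random.gauss(0, 1) for _ in range(n)]
z2 = [alpha * xi + random.gauss(0, 1) for xi in x2]
y2 = [tau * xi + beta * zi + random.gauss(0, 1) for xi, zi in zip(x2, z2)]
print(slope(x2, y2)) # close to tau + beta*alpha = 2.6, the total effect, not tau
```

With these values, the confounded slope converges to $\tau + \beta\gamma/(\gamma^2 + 1) \approx 1.98$ rather than $\tau = 1$.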
This is all to say that it's possible to have OVB without confounding if the coefficient you are trying to estimate is a direct effect, in which case omitting the mediator yields a biased estimate of this quantity. In the absence of confounding, the model omitting the mediator yields the total effect. The formula for the bias is the same regardless of the data-generating process of $X$ and $Z$, but the interpretation of the biased parameter depends on the causal relationship between $X$ and $Z$.
35,056 | Linear regression when dividing the dependent variable by the independent variable | The first equation: $$ y = \beta'x + \epsilon $$
represents a linear regression where there is a linear association between $x$ and $y$ with some error $\epsilon$
Taking the 2nd equation:
$$ \frac{y}{x} = \beta'x + \epsilon $$
and multiplying through by $x$ we have:
$$ y = \beta'x^2 + \epsilon x $$
So we can interpret this as a linear regression where the functional form is quadratic and the errors are proportional to $x$
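A small simulation (with $\beta$ and the range of $x$ chosen arbitrarily for illustration) confirms this reading: generating $y = \beta x^2 + \epsilon x$ and regressing $y/x$ on $x$ recovers $\beta$ with homoskedastic errors:

```python
import random
random.seed(1)

beta, n = 2.0, 100_000  # arbitrary true coefficient and sample size
x = [random.uniform(1, 3) for _ in range(n)]
# y = beta*x^2 + eps*x, equivalently y/x = beta*x + eps with homoskedastic eps
y = [beta * xi ** 2 + random.gauss(0, 1) * xi for xi in x]
ratio = [yi / xi for yi, xi in zip(y, x)]

def slope(u, v):
    # OLS slope of v on u: cov(u, v) / var(u)
    mu, mv = sum(u) / n, sum(v) / n
    return sum((a - mu) * (b - mv) for a, b in zip(u, v)) / sum((a - mu) ** 2 for a in u)

print(slope(x, ratio))  # close to beta = 2.0
```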
35,057 | Do I need to adjust OLS standard errors after matching? | Following up on Dimitriy's comment, which I agree with. There are (at least) three sources of uncertainty when performing a propensity score matching analysis: 1) the estimation of the PS, 2) the matching, and 3) sampling variability. I have been writing a review of uncertainty estimation after matching so I'll briefly share those findings here.
The way standard errors must be estimated depends on how the matching was performed. For many forms of matching, we only have simulation evidence of how to proceed; for others, we have analytic expressions; and for others, we have both. After k:1 matching without replacement (including 1:1 matching), the evidence points to using a cluster-robust standard error with pair membership as the clustering variable. Austin and Small (2014) and many of Austin's other papers confirm this using simulation evidence, and Abadie and Spiess (2020) derive this analytically. Both papers also point to the block bootstrap as another solution, in which pairs are sampled with replacement from the matched dataset, and effects are estimated within each bootstrap sample to form the sampling distribution. This is statistically equivalent to a cluster-robust standard error.
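The block bootstrap described here is easy to sketch: resample whole matched pairs with replacement (never splitting a pair) and recompute the effect estimate in each resample. The pair outcomes below are invented for illustration:

```python
import random
random.seed(2)

# Invented matched pairs: (treated outcome, control outcome)
pairs = [(3.1, 2.0), (2.5, 2.2), (4.0, 2.9), (3.3, 3.0), (2.8, 1.9),
         (3.6, 2.5), (2.9, 2.4), (3.8, 3.1), (3.0, 2.6), (3.4, 2.3)]

def effect(ps):
    # average within-pair difference
    return sum(t - c for t, c in ps) / len(ps)

# Block (pair) bootstrap: resample whole pairs with replacement
boots = []
for _ in range(2000):
    resample = [random.choice(pairs) for _ in pairs]
    boots.append(effect(resample))

m = sum(boots) / len(boots)
se = (sum((b - m) ** 2 for b in boots) / (len(boots) - 1)) ** 0.5
print(effect(pairs), se)  # point estimate (about 0.75 here) and its bootstrap SE
```

The standard deviation of the resampled estimates plays the same role as the cluster-robust standard error with pair membership as the clustering variable.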
There is some debate about how to account for the estimation of the propensity score. Abadie and Imbens (2016) proved analytically that when matching with replacement, ignoring the propensity score estimation makes inferences conservative when estimating the ATE but could go either way when estimating the ATT. However, when Bodory et al. (2020) performed a simulation study attempting to examine the performance of Abadie and Imbens' proposed standard error estimator that accounts for propensity score estimation, they found it to be anti-conservative, and that methods that ignored propensity score estimation performed better empirically. Austin's simulations also indicate that ignoring the estimation of the propensity scores and using cluster-robust standard errors tends to be sufficient.
Finally, it should be known that the standard error depends less on the method used to estimate it when covariates are included in the outcome model. Abadie and Spiess (2020) derived this for matching without replacement, and Hill and Reiter (2006) demonstrated this with simulations for matching with replacement.
What should you do? Include covariates in your outcome model, especially covariates with remaining imbalance or that are highly predictive of the outcome, and use a cluster-robust standard error estimator to estimate the standard error. You can cite Abadie and Spiess (2020) and Austin and Small (2014) to justify this choice.
I'll show you how to implement this in R. Use match.data() on the matchit object to extract the matched dataset. Then use the following code to estimate the effect and its standard error (letting Y be the outcome, A the treatment, X1 and X2 the covariates, and m.data the output of match.data()):
fit <- lm(Y ~ A + X1 + X2, data = m.data, weights = weights)
lmtest::coeftest(fit, vcov. = sandwich::vcovCL, cluster = ~subclass)
Remember only to interpret the coefficient on treatment and not those on the covariates.
Austin, P. C., & Small, D. S. (2014). The use of bootstrapping when using propensity-score matching without replacement: A simulation study. Statistics in Medicine, 33(24), 4306–4319. https://doi.org/10.1002/sim.6276
Abadie, A., & Imbens, G. W. (2016). Matching on the Estimated Propensity Score. Econometrica, 84(2), 781–807. https://doi.org/10.3982/ECTA11293
Abadie, A., & Spiess, J. (2020). Robust Post-Matching Inference. Journal of the American Statistical Association, 0(ja), 1–37. https://doi.org/10.1080/01621459.2020.1840383
Bodory, H., Camponovo, L., Huber, M., & Lechner, M. (2020). The Finite Sample Performance of Inference Methods for Propensity Score Matching and Weighting Estimators. Journal of Business & Economic Statistics, 38(1), 183–200. https://doi.org/10.1080/07350015.2018.1476247
Hill, J., & Reiter, J. P. (2006). Interval estimation for treatment effects using propensity score matching. Statistics in Medicine, 25(13), 2230–2256. https://doi.org/10.1002/sim.2277
35,058 | How much does Mathematical Logic relate to Statistics? | You can derive Bayesian statistics from mathematical logic. See its axiomatization in logic at
Cox, R. T. (1961). The Algebra of Probable Inference. Baltimore, MD: Johns Hopkins University Press.
I cannot answer how many statistics professors are required to study logic at a deep level.
You can find its extension at
Edwin Thompson Jaynes, Probability Theory: The Logic of Science, Cambridge University Press (2003).
You can also pick up anything on decision theory and you are back in mathematical logic combined with a utility function.
35,059 | How much does Mathematical Logic relate to Statistics? | Answering this question requires a degree of generalisation that may obscure the diverse pathways through which people become statistics professors. Here I will give some broad generalisations based on my own observations of the pathways through which my own mentors and colleagues became statistics professors.
In my observation, most ---but not all--- statistics professors who work on theoretical material come directly from a mathematics background. Most have done an undergraduate degree in mathematics, and this piqued their interest in probability and statistics, leading to postgraduate work in statistics. For professors with this background, they were taught all the things in the coverage of an undergraduate mathematics degree, including exposure to naive set theory, axiomatic set theory, and the foundations of mathematics. This is usually something that they previously studied during undergraduate/postgraduate degrees, but it is not common for them to continue work in this area in their careers, and so they are usually quite rusty on this material.
Other statistics professors come from an applied science background with an undergraduate degree that was in some area that used statistics but did not involve deeper study of mathematics (e.g., economics, actuarial studies, finance, psychology, etc.). For these people, they usually hit a bit of a mathematical wall when starting postgraduate work in statistics, and they have to learn a lot of mathematical material that was absent in their undergraduate degree. This would usually include learning measure theory and set theory, and learning real analysis in greater depth than is usually covered in an applied science degree. Deeper learning of the foundations of mathematics would not usually be required, but it is not uncommon for people to dabble out of curiosity.
In terms of the depth of mathematical knowledge required for theoretical statistical work, for the most part, it is sufficient to have a solid understanding of logic and proofs (so that you can form theorems and prove them), and also have a good understanding of real analysis and measure theory. That is enough to understand the foundations of probability theory, which is where most statistical theory work starts. Knowledge of the foundations of mathematics is rarely required --- it is usually sufficient to take for granted that we can form a probability measure on a sigma-field of sets (e.g., the Borel sets) and start from there. Deeper foundational issues are left to logicians and mathematicians who work in that field.
Finally, it is worth noting that probabilists and statisticians often consider themselves somewhat similar to logicians, insofar as probability theory can be regarded as an "extension" of propositional (true-false) logic. This is particularly true for Bayesian statisticians, who often regard their work to be merely inductive logic, framed in mathematical form as an extension of propositional logic.
35,060 | How much does Mathematical Logic relate to Statistics? | Interesting question; is it maybe possible to find some references? See for instance Did Deborah Mayo refute Birnbaum's proof of the likelihood principle?, a post about D Mayo's claimed refutation of the likelihood principle, a discussion where it seems some of the subtleties studied in mathematical logic enter.
Might be interesting: MATHEMATICAL LOGIC AND STATISTICAL OR STOCHASTICAL WAYS OF THINKING: AN EDUCATIONAL POINT OF VIEW, and Abduction? Deduction? Induction? Is There a Logic of Exploratory Data Analysis?
Actually, changing search terms in Google Scholar to "formal logic" machine learning gives a lot more interesting-looking hits, which might just be hinting at something ...
35,061 | How much does Mathematical Logic relate to Statistics? | Mathematical logic and axiomatic set theory are deeper, "lower" layers (or you can also call it "background"). You don't necessarily need to study them to be a good statistician. You will just apply them without even knowing it. But once you dive into them, you realize that they are actually keystones on which all the current science stands (well, math stands on them and current science stands on math). These disciplines formalize things which are considered so "obvious" and "natural" by scientists (i.e. axioms) that they don't even think about them.
I'd propose an analogy with the computer world: if statistics were an application, a program, then mathematical logic and set theory would be the operating system. You can happily use statistics without understanding the operating system.
One might imagine it like this:
The blue boxes are actually these basic keystone layers which most people using applied statistics and mathematics won't need. The orange boxes are the applied disciplines which build on the layers below as if they were axioms. It is just an imperfect sketch of course; one might discuss whether e.g. Mathematical analysis shouldn't be in the blue layers as well, and perhaps also Arithmetic... And the orange boxes would perhaps form a network rather than independent boxes... so don't take it overly seriously :-) It just gives you an idea.
So, to summarize your questions:
Statistics professors don't need to study Mathematical Logic, just maybe the basics in the first semester, and even that is not strictly necessary. I know a lot of excellent statisticians who never studied the foundations of mathematical logic.
Statistics and statisticians use mathematical logic without even knowing it. It is a base, an "operating system" of every formula, every statement. It is a basic keystone of all the scientific disciplines. But these are "lower layers" you don't have to dive deep into in order to be an excellent statistician.
35,062 | Expectation of sample averages from normal distribution | Let's take $\sigma=1$ and ignore the division by $k;$ these simplifications will require us to multiply the answer by $|\sigma|/k$ (which I leave up to you). Thus we seek the expectation of $\left|Z(n,k)\right| $ where
$$Z(n,k) = \sum_{i\in\Phi_1} s_i - \sum_{i\in\Phi_2}s_i.$$
Because $-s_i$ and $s_i$ have the same distribution, the expression inside the absolute value has the same distribution as
$$\sum_{i\in\Phi_1\oplus\Phi_2}s_i$$
(writing $\Phi_1\oplus\Phi_2$ for the symmetric difference $\left(\Phi_1\cup \Phi_2\right) \setminus \left(\Phi_1\cap\Phi_2\right)$), because the values in the intersection $\Phi_1\cap\Phi_2$ cancel out in the definition of $Z(n,k).$
Conditional on $(\Phi_1,\Phi_2),$ since $Z$ is the sum of independent Normal variables, its distribution is Normal with mean $0$ and variance $2(k-j)$ where $j$ is the cardinality of $\Phi_1\cap\Phi_2.$ (Notice that the component for $j=k$ is singular: it is an atom at $0.$)
Consequently, the distribution of $Z$ is a mixture of these Normal distributions. The weights in the mixture are the chances of $j$ given by the hypergeometric distribution
$$\Pr(|\Phi_1\cap\Phi_2|=j) = \frac{\binom{k}{j}\binom{n-k}{k-j}}{\binom{n}{k}} =: p_{n,k}(j).$$
The distribution of $|Z(n,k)|$ thus is a mixture of variables $Z_j(k),$ $j=0, 1, \ldots, k,$ that are $\sqrt{2(k-j)}$ times (independent copies of) $\chi(1)$ variables. Its expectation therefore is
$$E\left[\left|Z(n,k)\right|\right] = \sum_{j=0}^k p_{n,k}(j) \sqrt{2(k-j)} \sqrt{2/\pi} = \frac{2}{\sqrt{\pi}} \sum_{j=0}^k \sqrt{k-j}\, p_{n,k}(j).$$
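As a lightweight cross-check of this expectation formula (a Python sketch rather than the answer's R; the function names here are mine), one can compare the hypergeometric-mixture sum with a brute-force Monte Carlo estimate:

```python
import math
import random

def e_abs_z(n, k):
    """E|Z(n,k)| for sigma = 1, without the 1/k factor, via the hypergeometric mixture."""
    total = 0.0
    for j in range(k + 1):
        # Hypergeometric weight p_{n,k}(j); math.comb returns 0 when k - j > n - k
        p = math.comb(k, j) * math.comb(n - k, k - j) / math.comb(n, k)
        total += p * math.sqrt(k - j)
    return 2.0 / math.sqrt(math.pi) * total

def e_abs_z_mc(n, k, reps=100_000, seed=17):
    """Brute-force Monte Carlo estimate of the same quantity."""
    rng = random.Random(seed)
    idx = range(n)
    acc = 0.0
    for _ in range(reps):
        x = [rng.gauss(0.0, 1.0) for _ in range(n)]
        s1 = rng.sample(idx, k)  # subsample without replacement
        s2 = rng.sample(idx, k)
        acc += abs(sum(x[i] for i in s1) - sum(x[i] for i in s2))
    return acc / reps

print(e_abs_z(5, 3), e_abs_z_mc(5, 3))  # the two values should be close
```

For $n=k$ the mixture collapses to the atom at $0$, so `e_abs_z(n, n)` returns exactly `0.0`, matching the complete-cancellation case discussed below.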
As a test, we may simulate many values of $Z(n,k)$ directly from either of the first two formulas and compare their distribution to the mixture. Here, for instance, is the cumulative distribution of $5000$ simulated values on which the mixture CDF is overplotted in red:
The agreement is excellent.
Finally, with the formula for the expected absolute value available, we may plot $E\left[\left|Z(n,k)\right|\right]$ for $k=0, 1, \ldots, n.$ Here is a plot for larger $n:$
Remarks
This analysis readily extends to the case where $\Phi_1$ and $\Phi_2$ are of different sizes $k_1$ and $k_2:$ replace $2(k-j) = \left|\Phi_1\oplus\Phi_2\right|$ by $(k_1-j)+(k_2-j)$ at the outset and use
$$p_{n;k_1,k_2}(j)=\Pr\left(\left|\Phi_1\cap\Phi_2\right| = j\right) = \frac{\binom{k_1}{j}\binom{n-k_1}{k_2-j}}{\binom{n}{k_2}}$$
for the mixture weights, taking the sum over all $j$ for which the binomial coefficients are nonzero.
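As a quick sanity check on these generalized weights (a Python sketch; the function name is mine), they should sum to one over the attainable $j$ and give the expected overlap $k_1 k_2/n$:

```python
import math

def p_overlap(n, k1, k2, j):
    """P(|Phi1 ∩ Phi2| = j) for independent uniform subsets of sizes k1 and k2 of {1..n}."""
    if j < 0 or j > k2:
        return 0.0
    # math.comb returns 0 whenever a binomial coefficient is out of range
    return math.comb(k1, j) * math.comb(n - k1, k2 - j) / math.comb(n, k2)

total = sum(p_overlap(12, 5, 7, j) for j in range(0, 8))
mean_overlap = sum(j * p_overlap(12, 5, 7, j) for j in range(0, 8))
print(total, mean_overlap)  # ~1.0 and 35/12, the hypergeometric mean k1*k2/n
```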
The atom (discrete component) in the distribution of $Z$ occurs only when $k_1=k_2=k.$ Its weight is the chance of complete cancellation where $\Phi_1=\Phi_2,$ given by $$p_{n,k}(k) = 1/\binom{n}{k}.$$ In the figure (showing the CDF), this is the height of the vertical jump at $Z=0,$ there equal to $1/\binom{5}{3}=1/10.$
We could even go so far as to choose fixed coefficient vectors $\alpha_i$ and $\beta_i,$ let the $s_i$ have an arbitrary distribution (with possibly nonzero mean), and consider
$$Z(n,k;\alpha,\beta) = \sum_{i\in\Phi_1}\alpha_i s_i + \sum_{i\in\Phi_2}\beta_i s_i.$$
The question concerns the case $\alpha_i=1/k$ and $\beta_i=-1/k$ for all $i.$ The preliminary simplification of factoring out the common factor of $1/k$ is no longer available, but the analysis doesn't essentially change: the strategy of conditioning on $(\Phi_1,\Phi_2)$ and breaking the union of the samples into $\Phi_1\setminus\Phi_2,$ $\Phi_2\setminus\Phi_1,$ and $\Phi_1\cap\Phi_2$ still works. I leave the algebraic complications to the interested reader.
Appendix
Here is R code for the simulation in the first figure:
n <- 5
k <- 3
#
# Random draws of Z
#
set.seed(17)
Z <- replicate(5e3, {
x <- rnorm(n)
i1 <- sample.int(n, k)
i2 <- sample.int(n, k)
sum(x[i1]) - sum(x[i2]) # Original formula
# sum(x[setdiff(union(i1,i2), intersect(i1,i2))])# Second formula
})
#
# CDF of Z
#
pf <- function(x, n, k) {
lp <- function(j) lchoose(k,j) + lchoose(n-k,k-j) - lchoose(n,k)
z <- sapply(0:k, function(j) exp(lp(j) + pnorm(x, 0, sqrt(2*(k-j)), log=TRUE)))
rowSums(matrix(z, ncol=k+1))
}
#
# Plots
#
plot(ecdf(Z), main=paste0("Simulated values of Z(",n,",",k,")"),
cex.main=1, xlab="Z", ylab="Probability")
curve(pf(x, n, k), xlim=c(min(Z), -1e-15), add=TRUE, col="Red", lwd=2, n=1001)
curve(pf(x, n, k), xlim=c(1e-15, max(Z)), add=TRUE, col="Red", lwd=2, n=1001)
Here is R code for the second figure, showing the direct calculation of the expectation:
eZ <- Vectorize(function(n, k) {
p <- function(j) exp(lchoose(k,j) + lchoose(n-k,k-j) - lchoose(n,k))
j <- 0:k
2 / sqrt(pi) * sum(sqrt(k-j) * p(j))
}, "k")
n <- 25
plot(0:n, eZ(n, 0:n), type="h", ylab="Value",
main=expression(E*group("[", list(italic(Z)(25,k)), "]")), cex.main=1,
     bty="n", xlab=expression(italic(k)))
35,063 | Expectation of sample averages from normal distribution | Suppose $n = 100, k = 80.$ Then it makes a difference whether
sampling is with or without replacement.
set.seed(2020)
x = rnorm(100, 50, 8)
a = mean(x); a
[1] 50.87113
sd(x); sd(x)/sqrt(100)
[1] 8.954334
[1] 0.8954334 # aprx SE mean
The population SD is $\sigma = 8.$ The reference sample of 100 has $S = 8.954,$
so the SE mean estimated from the reference sample is $S/\sqrt{n} = 0.8954.$
a.wo = replicate(10^5, mean(sample(x,80)) )
sd(a.wo)
[1] 0.4467356 # aprx SE mean w/o replacement
a.wr = replicate(10^5, mean(sample(x,80, rep=T)) )
sd(a.wr)
[1] 0.99378 # aprx SE mean with replacement
Means of subsamples taken without replacement are less variable than means
of subsamples taken with replacement. As the available pool of values
decreases, so does the variability. Also, means of subsamples taken with
replacement get more variable as the size of the subsample decreases (as for $k=50$ below).
a.wr.50 = replicate(10^5, mean(sample(x,50, rep=T)) )
sd(a.wr.50)
[1] 1.262685
Now for a second vector of $100\,000$ such averages of subsamples of size $k=80.$
a.wr2 = replicate(10^5, mean(sample(x,80,rep=T)))
sd(a.wr2)
[1] 0.9945862
mean(abs(a.wr - a.wr2))
[1] 1.121448
As I interpret your question, the last result above approximates the
answer to your question for $n = 100, k = 80$ and sampling with replacement
for two independent samples.
If that is correct, it seems worthwhile to try to get an analytic solution
for $Var(\frac{1}{k}\sum_i X_i)$ and from there the variance of the absolute difference of two such averages.
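Following up on that last sentence: for subsampling without replacement from a fixed reference sample, the classic finite-population result gives $Var(\bar x_k) = \frac{S^2}{k}\left(1-\frac{k}{n}\right)$, where $S^2$ is the reference sample's variance with divisor $n-1$. A Python sketch (variable names are mine, mirroring the R simulation above) checks this against simulation:

```python
import random
import statistics

# Fixed reference sample of n = 100 values (mirrors the answer's rnorm(100, 50, 8))
random.seed(2020)
x = [random.gauss(50, 8) for _ in range(100)]
n, k = len(x), 80

S2 = statistics.variance(x)  # divisor n - 1
se_wor_theory = (S2 / k * (1 - k / n)) ** 0.5  # finite-population correction

# Simulate means of subsamples drawn without replacement
means = [statistics.fmean(random.sample(x, k)) for _ in range(20_000)]
se_wor_sim = statistics.stdev(means)

print(se_wor_theory, se_wor_sim)  # the two should agree closely
```

Plugging in the answer's numbers ($S = 8.954$, $n=100$, $k=80$) gives $\sqrt{(8.954^2/80)(1-0.8)} \approx 0.448$, consistent with the simulated `0.4467`.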
35,064 | Expectation of sample averages from normal distribution | I have started this way: The probability that an element from the second sample is already in the first is $\dfrac{k}{n}$.
If $p$ elements overlap between the two samples (and consequently $k-p$ wash out), then the difference is distributed as $\mathcal{N}\left(0,2\frac{\sigma^2}{k^2}\left(k-p\right)\right)$. The expectation of the absolute value is therefore $\frac{2\sigma}{k}\sqrt{\frac{k-p}{\pi}}$ (using $E|N(0,v)|=\sqrt{2v/\pi}$).
The next step is to take the expectation over different overlap levels $p$:
$$\frac{2\sigma}{k\sqrt{\pi}} \sum_{p=0}^k \binom{k}{p} \left(\frac{k}{n}\right)^p \left(1-\frac{k}{n}\right)^{k-p} \sqrt{k-p}.$$
Does this have a closed form?
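I am not aware of an elementary closed form, but the sum is trivial to evaluate numerically. One caveat: since $E|N(0,v)|=\sqrt{2v/\pi}$, the per-overlap term carries a factor of $1/\sqrt{\pi}$. A small Python sketch (the function name is mine; this keeps the binomial approximation to the overlap, whereas under sampling without replacement the overlap count is exactly hypergeometric):

```python
import math

def approx_e_abs_diff(n, k, sigma=1.0):
    """Binomial-overlap approximation to E|mean1 - mean2| for two size-k subsamples of n."""
    q = k / n  # probability that any given element of sample 2 is also in sample 1
    total = 0.0
    for p in range(k + 1):
        w = math.comb(k, p) * q**p * (1 - q)**(k - p)
        total += w * math.sqrt(k - p)
    return 2 * sigma / (k * math.sqrt(math.pi)) * total

print(approx_e_abs_diff(100, 80))
```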
35,065 | Are there any power calculation formulas for ML methods beyond Logistic Regression? [closed] | The type of power analysis you seem to be referring to is: make some assumptions about the distribution of variables, the effect size, etc., and then ask how many samples you'd need to have a (say) 80% probability of detecting an effect of that magnitude.
There are in fact many results of a similar flavor in ML theory. For instance, here's one for SVMs (rephrased from Corollary 15.7 of Shalev-Shwartz and Ben-David, Understanding Machine Learning):
$\DeclareMathOperator*{\argmin}{arg\,min}
\DeclareMathOperator*{\E}{\mathbb{E}}
\DeclareMathOperator{\sign}{sign}
\newcommand{\x}{\mathbf x}
\newcommand{\X}{\mathcal X}
\newcommand{\D}{\mathcal D}
\newcommand{\norm}[1]{\lVert #1 \rVert}
\newcommand{\w}{\mathbf w}
\newcommand{\u}{\mathbf u}
$
Let $\D$ be a distribution over $\X \times \{-1, 1\}$, where $\X = \{ \x : \norm\x \le \rho \}$. Consider running Soft-SVM (with no bias term) on a training set $S \sim \D^m$ and let $A(S)$ be the solution of Soft-SVM:
$$A(S) = \argmin_{\mathbf w} \lambda \norm\w^2 + L_S^\mathit{hinge}(\w)$$
where $$L_S^\mathit{hinge}(\w) = \frac1m \sum_{i=1}^m \max\{0, 1 - y_i \, \langle \w, \x_i \rangle \} .$$
Then, for every $B > 0$, if we set $\lambda = \sqrt{2 \rho^2 / (B^2 m)}$, then
$$
\E_{S \sim \D^m}[ L_\D^{0-1}(A(S)) ]
\le \E_{S \sim \D^m}[ L_\D^\mathit{hinge}(A(S)) ]
\le \min_{\w : \norm\w \le B} L_\D^\mathrm{hinge}(\w) + \sqrt{\frac{8 \rho^2 B^2}{m}}
,$$
where $L_\D^\mathit{hinge}(\w) = \E_{S \sim \D} L_S^\mathit{hinge}(\w)$,
and $L_\D^{0-1}(\w) = \E_{(\x, y) \sim \D} \left[ \begin{cases}0 & \sign(\langle \w, \x \rangle) = y \\ 1 & \text{otherwise} \end{cases} \right]$ is just the error rate of $\w$.
That is,
The error rate of our SVM is no worse than its hinge loss performance: this is just because for any predictor, $L^{0-1} \le L^\mathit{hinge}$. If the prediction is the wrong sign so that 0-1 loss is 1, then the hinge loss is at least 1; if the prediction is the right sign so that 0-1 loss is 0, the hinge loss is between 0 and 1.
The expected hinge loss performance of our SVM is not too much worse than the hinge loss for the best-possible SVM of norm at most $B$: the gap is at most $2 \rho B \sqrt{2 / m}$.
(There are similar results for high-probability bounds on the accuracy, and requiring the offset to be 0 is just a convenience; adding 1 to the kernel and centering the labels generally accounts for it, but there are presumably bounds out there explicitly incorporating the offset as well.)
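Read as a power analysis, the bound above can be inverted: to guarantee an excess hinge loss of at most $\epsilon$, it suffices to have $\sqrt{8\rho^2 B^2/m} \le \epsilon$, i.e. $m \ge 8\rho^2 B^2/\epsilon^2$. A minimal Python sketch (the function name is mine):

```python
import math

def samples_for_gap(rho, B, eps):
    """Smallest m making the generalization-gap term sqrt(8 * rho^2 * B^2 / m) <= eps."""
    return math.ceil(8 * rho**2 * B**2 / eps**2)

print(samples_for_gap(1.0, 10.0, 0.1))  # 80000
```

Note the quadratic blow-up in $B$ and $1/\epsilon$: halving the target gap quadruples the required sample size, which is one reason such bounds tend to be very conservative in practice.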
Now, because it's a generic bound, its numerical value is probably quite loose on any particular problem. But even if we accept that it will be quite conservative: how do we know the best possible hinge loss for a given $B$, and then trade that off with the other term to find the best value of the overall bound?
The same kind of issues hold in power analysis for linear regression – what do we think the effect size might be? But it's often easier to reason about effect size than about the best hinge loss at a given norm. It might be easier to make a guess about accuracy, but unfortunately, I'm pretty sure no comparable bounds are available with the best-possible accuracy/0-1 loss on the right-hand side.
(Finding the best linear predictor w.r.t. 0-1 loss is NP-hard, while SVMs are in P, so if you find such a bound, please let me know!)
In practice, then, the method in machine learning is almost always "try it and see how well you do on a validation set." People develop a rough intuition – you can't train a 44,654,504-parameter ResNet-101 on 60,000-image MNIST, but you can on 1,200,000-image ImageNet. But we don't really theoretically understand why this is true; it's a very active current area of research.
The type of power analysis you seem to be referring to is: make some assumptions about the distribution of variables, the effect size, etc., and then ask how many samples you'd need to have a (say) 80% probability of detecting an effect of that magnitude.
There are in fact many results of a similar flavor in ML theory. For instance, here's one for SVMs (rephrased from Corollary 15.7 of Shalev-Shwartz and Ben-David, Understanding Machine Learning):
$\DeclareMathOperator*{\argmin}{arg\,min}
\DeclareMathOperator*{\E}{\mathbb{E}}
\DeclareMathOperator{\sign}{sign}
\newcommand{\x}{\mathbf x}
\newcommand{\X}{\mathcal X}
\newcommand{\D}{\mathcal D}
\newcommand{\norm}[1]{\lVert #1 \rVert}
\newcommand{\w}{\mathbf w}
\newcommand{\u}{\mathbf u}
$
Let $\D$ be a distribution over $\X \times \{-1, 1\}$, where $\X = \{ \x : \norm\x \le \rho \}$. Consider running Soft-SVM (with no bias term) on a training set $S \sim \D^m$ and let $A(S)$ be the solution of Soft-SVM:
$$A(s) = \argmin_{\mathbf w} \lambda \norm\w^2 + L_S^\mathit{hinge}(\w)$$
where $$L_S^\mathrm{hinge}(\w) = \frac1m \sum_{i=1}^m \max\{0, 1 - y \, \langle \w, \x_i \rangle \} .$$
Then, for every $B > 0$, if we set $\lambda = \sqrt{2 \rho^2 / (B^2 m)}$, then
$$
\E_{S \sim \D^m}[ L_\D^{0-1}(A(S)) ]
\le \E_{S \sim \D^m}[ L_\D^\mathit{hinge}(A(S)) ]
\le \min_{\w : \norm\w \le B} L_\D^\mathrm{hinge}(\w) + \sqrt{\frac{8 \rho^2 B^2}{m}}
,$$
where $L_\D^\mathit{hinge}(\w) = \E_{S \sim \D} L_S^\mathit{hinge}(\w)$,
and $L_\D^{0-1}(\w) = \E_{(\x, y) \sim \D} \left[ \begin{cases}0 & \sign(\langle \w, \x \rangle) = y \\ 1 & \text{otherwise} \end{cases} \right]$ is just the error rate of $\w$.
That is,
The error rate of our SVM is no worse than its hinge loss performance: this is just because for any predictor, $L^{0-1} \le L^\mathit{hinge}$. If the prediction is the wrong sign so that 0-1 loss is 1, then the hinge loss is at least 1; if the prediction is the right sign so that 0-1 loss is 0, the hinge loss is between 0 and 1.
The expected hinge loss performance of our SVM is not too much worse than the hinge loss for the best-possible SVM of norm at most $B$: the gap is at most $2 \rho B \sqrt{2 / m}$.
(There are similar results for high-probability bounds on the accuracy, and requiring the offset to be 0 is just a convenience; adding 1 to the kernel and centering the labels generally accounts for it, but there are presumably bounds out there explicitly incorporating the offset as well.)
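To make the analogy with power analysis concrete, here is a toy sketch that inverts the excess-risk term $\sqrt{8 \rho^2 B^2 / m}$ of the bound to get a sample size; the values of $\rho$, $B$, and the target gap $\varepsilon$ are arbitrary assumptions, not numbers from any real problem:

```r
# Invert the SVM generalization bound: how many samples m make the
# excess-risk term sqrt(8 * rho^2 * B^2 / m) fall below a target eps?
# (rho, B, eps below are illustrative assumptions.)
svm_bound_gap <- function(rho, B, m) sqrt(8 * rho^2 * B^2 / m)
svm_sample_size <- function(rho, B, eps) ceiling(8 * rho^2 * B^2 / eps^2)

m_needed <- svm_sample_size(rho = 1, B = 5, eps = 0.05)
m_needed                       # 80000
svm_bound_gap(1, 5, m_needed)  # 0.05, i.e. at most eps by construction
```

As the surrounding discussion notes, such a number will typically be very conservative on any particular problem.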
Now, because it's a generic bound, its numerical value is probably quite loose on any particular problem. But even if we accept that it will be quite conservative: how do we know the best possible hinge loss for a given $B$, and then trade that off with the other term to find the best value of the overall bound?
The same kinds of issues arise in power analysis for linear regression – what do we think the effect size might be? But it's often easier to reason about effect size than about the best hinge loss at a given norm. It might be easier to make a guess about accuracy, but unfortunately, I'm pretty sure no comparable bounds are available with the best-possible accuracy/0-1 loss on the right-hand side.
(Finding the best linear predictor w.r.t. 0-1 loss is NP-hard, while SVMs are in P, so if you find such a bound, please let me know!)
In practice, then, the method in machine learning is almost always "try it and see how well you do on a validation set." People develop a rough intuition – you can't train a 44,654,504-parameter ResNet-101 on 60,000-image MNIST, but you can on 1,200,000-image ImageNet. But we don't really theoretically understand why this is true; it's a very active current area of research.
35,066 | Are there any power calculation formulas for ML methods beyond Logistic Regression? [closed] | Power analysis refers to calculating power of a hypothesis test
The power of a binary hypothesis test is the probability that the test
rejects the null hypothesis ( $H_{0}$) when a specific alternative
hypothesis ( $H_{1}$ ) is true
In machine learning you make no hypotheses, are not interested in testing them, and don't have any hypothesis tests available. Since there are no hypothesis tests, there is no power analysis. You can do power analysis for the hypothesis tests related to logistic regression, because logistic regression is a statistical model that is also used in machine learning as a classifier. That is not the case for other machine learning models. Moreover, even if you were using logistic regression for making predictions, rather than inference, you would not do any power analysis. Finally, power analysis for logistic regression would not tell you how accurate the predictions would be.
This is also not really the case in machine learning. In machine learning we do not care about "minimal sample size", as machine learning models are usually used with large datasets. For small datasets, you would usually use simple algorithms like logistic regression, because with more complicated ones you risk overfitting.
As for being "confident in the classification your ML model creates", this is judged by using things like cross validation. What machine learning models do is learn to recognize patterns in the data and make predictions given the familiar patterns. Whatever data you give them, they will always find some patterns and make some predictions. If the data is garbage, they will give you garbage predictions.
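As a minimal sketch of the cross-validation idea just mentioned — the simulated data and the logistic-regression classifier here are illustrative assumptions, not part of the original answer:

```r
# k-fold cross-validation of a logistic-regression classifier:
# estimate out-of-sample accuracy by refitting on k-1 folds and
# scoring the held-out fold. Data are simulated for illustration.
set.seed(1)
d <- data.frame(x = rnorm(200))
d$y <- rbinom(200, 1, plogis(-1 + 2 * d$x))

k <- 5
fold <- sample(rep(1:k, length.out = nrow(d)))
acc <- sapply(1:k, function(i) {
  fit <- glm(y ~ x, family = binomial, data = d[fold != i, ])
  p <- predict(fit, newdata = d[fold == i, ], type = "response")
  mean((p > 0.5) == d$y[fold == i])
})
mean(acc)  # cross-validated accuracy estimate
```

The spread of `acc` across folds also gives a rough sense of how stable that estimate is.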
You may also want to read The Two Cultures: statistics vs. machine learning? thread to learn about the differences between statistics and machine learning.
35,067 | Is exploratory data analysis (EDA) actually needed / useful | I come from a traditional biostatistics/epidemiology background, and EDA is definitely useful, although it doesn't mean doing histograms/correlation plots just for the sake of it. With the preeminence of machine learning and prediction, I do feel that it is practiced less and less often these days though.
If you are in medical statistics/epidemiology, then you are usually presented with "rectangular" datasets, i.e. datasets where your rows correspond to individual participants, and columns are variables (features in machine learning terms). You typically only focus on the variables that are relevant to your questions, and that generally won't be more than a dozen or so. It is of course possible that you have more. For example, you may have data collected over time, or biomarkers, or even genetic data. In these cases, you will need to find out the best practices for dealing with these data first. Often this will involve some kind of dimension reduction or summarization. What we emphatically don't do is to just throw everything into a machine learning model and see what predictions it generates. In other words, there's a strong emphasis on understanding your model.
Given the emphasis on understanding the model, EDA is indispensable in that it helps us to identify reasons for various unexpected behaviour or bias in our model fitting. For example, there may be one variable you expect to be very important, and it turns out that it isn't. You look at the histogram, and you see that the vast majority of its values were 0. Likewise, there may be patterns in missing data, and you need to understand them and how they may bias your results.
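A minimal sketch of the two checks just described — a variable dominated by zeros, and missing-data patterns — on simulated data; the variable names and distributions are illustrative assumptions:

```r
# Simulated data: a zero-inflated biomarker and an age variable
# with some values missing.
set.seed(42)
d <- data.frame(biomarker = rbinom(100, 1, 0.1) * rexp(100),
                age = ifelse(runif(100) < 0.2, NA, rnorm(100, 50, 10)))

mean(d$biomarker == 0)  # proportion of exact zeros
hist(d$biomarker)       # the histogram reveals the spike at 0
colSums(is.na(d))       # missing counts per variable
```

Neither check takes more than a line, yet either can change how you model the variable (e.g. a two-part model for the biomarker, or an explicit missingness mechanism for age).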
In summary, EDA is not something you do before your main analysis and forget about. It's something you keep doing together with your main analysis, to try and understand the picture better.
35,068 | Who did first perform maximum likelihood estimation? | The most relevant reference imho is Steve Stigler's "Epic history of maximum likelihood" (2007)
"There were early intelligent comments related to this problem [of
seeking the most probable distribution for the observation] already
in the 1750s by Thomases Simpson and Bayes and by Johann Heinrich
Lambert in 1760, but the first serious assault related to our topic
was by Joseph Louis Lagrange in 1769." S. Stigler (2007)
"By introducing restrictions in the form of the curve only after
deriving the estimates of probabilities, Lagrange’s analysis had the
curious consequence of always arriving at method of moment estimates,
even though starting with maximum likelihood!" S. Stigler (2007)
He also points to Daniel Bernoulli (1769) and Carl Friedrich Gauß (1809), albeit the latter started using Bayesian arguments to see the MLE as a posterior mode.
"...a long memoir by Karl Pearson and Louis Napoléon George Filon,
published in the Transactions of the Royal Society of London in 1898
has a place in history, more for what in the end it seemed to suggest,
rather than for what it accomplished." S. Stigler (2007)
"...the method of maximum likelihood was proposed independently by
Lambert and Daniel Bernoulli, but with no practical effect because the
maximum likelihood equation for the error distribution considered was
intractable." A. Hald (1999)
"It is an astounding fact that Edgeworth’s papers were unknown to Fisher when he wrote his paper on maximum likelihood estimation in 1912." A. Hald (1999)
A. Hald (1999) also mentions Encke (1832) and Hagen (1837) as maximising $p(\mathbf x|\theta)$ in $\theta$ to find the "most probable" sample. He further cites Chauvenet (1863) and Merriman (1884) before Edgeworth (1908).
"Edgeworth (1908) anticipated a good part of the (Fisher) 1922 version, but nobody noticed until a decade or so after Fisher had redone it." J. Aldrich (1997)
"...the [maximum likelihood] criterion appears at the head of the derivation of least squares in Chauvenet (1891, p.481), Bennett (1908, p.15) and Brunt (1917,p.77)" J. Aldrich (1997)
" Pearson (1896, p.265) states that the "best" value of r is found by
choosing the value for which "the observed result is the most
probable." J. Aldrich (1997)
Looking at Thurstone's bibliography, it does not appear that a relevant paper predates 1912.
"There were early intelligent comments related to this problem [of
seeking the most probable distributi | Who did first perform maximum likelihood estimation?
The most relevant reference imho is Steve Stigler's "Epic history of maximum likelihood" (2007)
"There were early intelligent comments related to this problem [of
seeking the most probable distribution for the observation] already
in the 1750s by Thomases Simpson and Bayes and by Johann Heinrich
Lambert in 1760, but the first serious assault related to our topic
was by Joseph Louis Lagrange in 1769." S. Stigler (2007)
"By introducing restrictions in the form of the curve only after
deriving the estimates of probabilities, Lagrange’s analysis had the
curious consequence of always arriving at method of moment estimates,
even though starting with maximum likelihood!" S. Stigler (2007)
He also points out at Daniel Bernoulli (1769) and Carl Friedrich Gauß (1809), albeit the later started using Bayesian arguments to see the MLE as a posterior mode.
"...a long memoir by Karl Pearson and Louis Napoléon George Filon,
published in the Transactions of the Royal Society of London in 1898
has a place in history, more for what in the end it seemed to suggest,
rather than for what it accomplished." S. Stiegler (2007)
"...the method of maximum likelihood was proposed independently by
Lambert and Daniel Bernoulli, but with no practical effect because the
maximum likelihood equation for the error distribution considered was
intractable." A. Hald (1999)
"It is an astounding fact that Edgeworth’s papers were unknown to Fisher when he wrote his paper on maximum likelihood estimation in 1912." A. Hald (1999)
A. Hald (1999) also mentions Encke (1832) and Hagen (1837) as maximising $p(\mathbf x|\theta)$ in $\theta$ to find the "most probable" sample. He further cites Chauvenet (1863) and Merriman (1884) before Edgeworth (1908).
"Edgeworth (1908) anticipated a good part of the (Fisher) 1922 version, but nobody noticed until a decade or so after Fisher had redone it." J. Aldrich (1997)
"...the [maximum likelihood] criterion appears at the head of the derivation of least squares in Chauvenet (1891, p.481), Bennett (1908, p.15) and Brunt (1917,p.77)" J. Aldrich (1997)
" Pearson (1896, p.265) states that the "best" value of r is found by
choosing the value for which "the observed result is the most
probable." J. Aldrich (1997)
Looking at Thurstone's bibliography, it does not appear a relevant paper predates 1912. | Who did first perform maximum likelihood estimation?
The most relevant reference imho is Steve Stigler's "Epic history of maximum likelihood" (2007)
"There were early intelligent comments related to this problem [of
seeking the most probable distributi |
35,069 | Do non-invertible MA models imply that the effect of past observations increases with the distance? | Not a big deal - it is strongly stationary and approaches white noise
The non-invertible $\text{MA}(1)$ process makes perfect sense, and it does not exhibit any particularly strange behaviour. Taking the Gaussian version of the process, for any vector $\mathbf{y} = (y_1,...,y_n)$ consisting of consecutive observations, we have $\mathbf{y} \sim \text{N}(\mathbf{0}, \mathbf{\Sigma})$ with covariance:
$$\mathbf{\Sigma} \equiv \frac{\sigma^2}{1+\theta^2} \begin{bmatrix}
1+\theta^2 & -\theta & 0 & \cdots & 0 & 0 & 0 \\
-\theta & 1+\theta^2 & -\theta & \cdots & 0 & 0 & 0 \\
0 & - \theta & 1+\theta^2 & \cdots & 0 & 0 & 0 \\
\vdots & \vdots & \vdots & \ddots & \vdots & \vdots & \vdots \\
0 & 0 & 0 & \cdots & 1+\theta^2 & -\theta & 0 \\
0 & 0 & 0 & \cdots & -\theta & 1+\theta^2 & -\theta \\
0 & 0 & 0 & \cdots & 0 & -\theta & 1+\theta^2 \\
\end{bmatrix}.$$
As you can see, this is a strongly stationary process, and observations that are more than one lag apart are independent, even when $|\theta|>1$. This is unsurprising, in view of the fact that such observations do not share any influence from the underlying white noise process. There does not appear to be any behaviour in which "past observations increases with the distance", and the equation you have stated does not establish this (see below for further discussion).
In fact, as $|\theta| \rightarrow \infty$ (which is the most extreme case of the phenomenon you are considering) the model reduces asymptotically to a trivial white noise process. This is completely unsurprising, in view of the fact that a large coefficient on the first-lagged error term dominates the unit coefficient on the concurrent error term, and shifts the model asymptotically towards the form $y_t \rightarrow \theta \epsilon_{t-1}$, which is just a scaled and shifted version of the underlying white noise process.
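A quick simulation sketch supports this, assuming the parameterization $y_t = \epsilon_t - \theta \epsilon_{t-1}$: even with a non-invertible $|\theta| > 1$, the sample autocorrelations vanish beyond lag 1, and the lag-1 value matches $-\theta/(1+\theta^2)$ implied by the covariance matrix above.

```r
# Simulate a non-invertible MA(1), y_t = e_t - theta * e_{t-1}, with
# |theta| > 1, and check that the sample ACF is banded as claimed.
set.seed(123)
theta <- 3      # non-invertible: |theta| > 1
n <- 1e5
e <- rnorm(n + 1)
y <- e[-1] - theta * e[-(n + 1)]

r <- acf(y, lag.max = 3, plot = FALSE)$acf[-1]  # drop lag 0
round(r, 3)
# lag 1 should be near -theta/(1 + theta^2) = -0.3; lags 2-3 near 0
```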
A note on your equation: In the equation in your question you write the current value of the observable time series as a geometrically increasing sum of past values, plus the left-over error terms. This is asserted to show that "the effect of past observations increases with the distance". However, the equation involves a large number of cancelling terms. To see this, let's expand out the past observable terms to show the cancelling of terms:
$$\begin{equation} \begin{aligned}
y_t
&= \epsilon_t - \sum_{i=1}^{t-1} \theta^i y_{t-i} - \theta^t \epsilon_0 \\[6pt]
&= \epsilon_t - \sum_{i=1}^{t-1} \theta^i (\epsilon_{t-i} - \theta \epsilon_{t-i-1}) - \theta^t \epsilon_0 \\[6pt]
&= \epsilon_t - ( \theta \epsilon_{t-1} - \theta^2 \epsilon_{t-2} )
\\[6pt]
&\quad \quad \quad \quad \quad \ \ \ - ( \theta^2 \epsilon_{t-2} - \theta^3 \epsilon_{t-3} ) \\[6pt]
&\quad \quad \quad \quad \quad \quad \quad \quad \quad \quad - ( \theta^3 \epsilon_{t-3} - \theta^4 \epsilon_{t-4} ) \\[6pt]
&\quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \ \ \ - \ \cdots \\[6pt]
&\quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \ \ \ - ( \theta^{t-1} \epsilon_1 - \theta^t \epsilon_0 ). \\[6pt]
\end{aligned} \end{equation}$$
We can see from this expansion that the geometrically increasing sum of past values of the observable time series is there solely to get the previous error term:
$$\epsilon_{t-1} = \sum_{i=1}^{t-1} \theta^{i-1} y_{t-i} + \theta^{t-1} \epsilon_0.$$
All that is happening here is that you are trying to express the previous error term in an awkward way. The fact that a long cancelling sum of geometrically weighted values of the series is equal to the desired error term does not demonstrate that past observations are having "an effect" on the present time-series value. It merely means that if you want to express $\epsilon_{t-1}$ in terms of $\epsilon_0$ then the only way you can do it is to add in the geometrically weighted sum of the observable series.
The non-invertible $\text{MA}(1)$ process makes perfect sense, and it does not exhibit any particularly strange behaviour. Taking | Do non-invertible MA models imply that the effect of past observations increases with the distance?
Not a big deal - it is strongly stationary and approaches white noise
The non-invertible $\text{MA}(1)$ process makes perfect sense, and it does not exhibit any particularly strange behaviour. Taking the Gaussian version of the process, for any vector $\mathbf{y} = (y_1,...,y_n)$ consisting of consecutive observations, we have $\mathbf{y} \sim \text{N}(\mathbf{0}, \mathbf{\Sigma})$ with covariance:
$$\mathbf{\Sigma} \equiv \frac{\sigma^2}{1+\theta^2} \begin{bmatrix}
1+\theta^2 & -\theta & 0 & \cdots & 0 & 0 & 0 \\
-\theta & 1+\theta^2 & -\theta & \cdots & 0 & 0 & 0 \\
0 & - \theta & 1+\theta^2 & \cdots & 0 & 0 & 0 \\
\vdots & \vdots & \vdots & \ddots & \vdots & \vdots & \vdots \\
0 & 0 & 0 & \cdots & 1+\theta^2 & -\theta & 0 \\
0 & 0 & 0 & \cdots & -\theta & 1+\theta^2 & -\theta \\
0 & 0 & 0 & \cdots & 0 & -\theta & 1+\theta^2 \\
\end{bmatrix}.$$
As you can see, this is a strongly stationary process, and observations that are more than one lag apart are independent, even when $|\theta|>1$. This is unsurprising, in view of the fact that such observations do not share any influence from the underlying white noise process. There does not appear to be any behaviour in which "past observations increases with the distance", and the equation you have stated does not establish this (see below for further discussion).
In fact, as $|\theta| \rightarrow \infty$ (which is the most extreme case of the phenomenon you are considering) the model reduces asymptotically to a trivial white noise process. This is completely unsurprising, in view of the fact that a large coefficient on the first-lagged error term dominates the unit coefficient on the concurrent error term, and shifts the model asymptotically towards the form $y_t \rightarrow \theta \epsilon_{t-1}$, which is just a scaled and shifted version of the underlying white noise process.
A note on your equation: In the equation in your question you write the current value of the observable time series as a geometrically increasing sum of past values, plus the left-over error terms. This is asserted to show that "the effect of past observations increases with the distance". However, the equation involves a large number of cancelling terms. To see this, let's expand out the past observable terms to show the cancelling of terms:
$$\begin{equation} \begin{aligned}
y_t
&= \epsilon_t - \sum_{i=1}^{t-1} \theta^i y_{t-i} - \theta^t \epsilon_0 \\[6pt]
&= \epsilon_t - \sum_{i=1}^{t-1} \theta^i (\epsilon_{t-i} - \theta \epsilon_{t-i-1}) - \theta^t \epsilon_0 \\[6pt]
&= \epsilon_t - ( \theta \epsilon_{t-1} - \theta^2 \epsilon_{t-2} )
\\[6pt]
&\quad \quad \quad \quad \quad \ \ \ - ( \theta^2 \epsilon_{t-2} - \theta^3 \epsilon_{t-3} ) \\[6pt]
&\quad \quad \quad \quad \quad \quad \quad \quad \quad \quad - ( \theta^3 \epsilon_{t-3} - \theta^4 \epsilon_{t-4} ) \\[6pt]
&\quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \ \ \ - \ \cdots \\[6pt]
&\quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \ \ \ - ( \theta^{t-1} \epsilon_1 - \theta^t \epsilon_0 ). \\[6pt]
\end{aligned} \end{equation}$$
We can see from this expansion that the geometrically increasing sum of past values of the observable time series is there solely to get the previous error term:
$$\epsilon_{t-1} = \sum_{i=1}^{t-1} \theta^{i-1} y_{t-i} + \theta^{t-1} \epsilon_0.$$
All that is happening here is that you are trying to express the previous error term in an awkward way. The fact that a long cancelling sum of geometrically weighted values of the series is equal to the desired error term does not demonstrate that past observations are having "an effect" on the present time-series value. It merely means that if you want to express $\epsilon_{t-1}$ in terms of $\epsilon_0$ then the only way you can do it is to add in the geometrically weighted sum of the observable series. | Do non-invertible MA models imply that the effect of past observations increases with the distance?
Not a big deal - it is strongly stationary and approaches white noise
The non-invertible $\text{MA}(1)$ process makes perfect sense, and it does not exhibit any particularly strange behaviour. Taking |
35,070 | Do non-invertible MA models imply that the effect of past observations increases with the distance? | I don't think it makes sense to ask for an example "from the real world where they [non-invertible MA models] occur". All you observe is $y_1,y_2,\dots,y_n$. As I try to explain in the post you link to, the joint distribution of these data can almost always (except in the case where the MA polynomial has one or more unit roots) be identically modelled as generated by either a number of non-invertible MA models or by a corresponding invertible MA model. Based on the data alone, there is therefore no way of knowing if the "real world" underlying mechanism corresponds to that of a non-invertible or invertible model. And ARIMA models are anyhow not intended as mechanistic models of the data-generating process in the first place.
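A small sketch of this equivalence, assuming the parameterization $y_t = \epsilon_t - \theta\epsilon_{t-1}$ with noise variance $\sigma^2$: an MA(1) with coefficient $1/\theta$ and noise variance $\theta^2\sigma^2$ has exactly the same autocovariances as the non-invertible one with coefficient $\theta$, so the two cannot be told apart from the data.

```r
# MA(1) autocovariances at lags 0 and 1:
#   gamma(0) = s2 * (1 + th^2),  gamma(1) = -s2 * th
acv <- function(th, s2) c(s2 * (1 + th^2), -s2 * th)

theta <- 3
acv(theta, 1)            # non-invertible model, sigma^2 = 1
acv(1 / theta, theta^2)  # invertible counterpart, sigma^2 = theta^2
# both give c(10, -3): identical second-order structure
```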
So this just boils down to restricting the parameter space to that of invertible models to make the model identifiable with the added benefit of having a model that is easily put into AR$(\infty)$ form.
35,071 | Confidence interval for difference between two predicted probabilities in R | One way to do this is profile likelihood. If we have a parameter vector $\psi$, profile likelihood is usually calculated for one of the components of $\psi$, but it can be defined for any parametric function of $\psi$. Below is a definition: suppose $L(\psi)$ is the likelihood function and interest (or focus) is on a scalar function $\theta = \theta(\psi)$; then
$$ L_P(\theta) = \max_{\{\psi\colon \theta(\psi)=\theta \}} L(\psi)$$
The implementations of profile likelihood in R (or elsewhere?) are not of this generality, so let us make it "by hand".
The model is
$$ \DeclareMathOperator{\P}{\mathbb{P}}
p_x= \P(Y=1 \mid X=x)= \frac1{1+e^{-\beta_0 - \beta_1 x}} $$ and the interest parameter $\theta$ is
$$ \theta = p_{0.75} - p_{0.25} $$
It doesn't look promising to try to solve the optimization symbolically, so we try numerically. This is a first attempt, so maybe we can do better. First, a plot of the (negative) profile likelihood for $\theta$, using the data simulated in the question:
The two blue lines are cutoffs for confidence intervals of 95 and 99%, respectively, based on quantiles from the reference chi-square distribution with 1 df. R code is below:
### First run code from question
library(bbmle)
make_negloglik <- function(y, x) {
n <- length(y)
stopifnot( n == length(x) )
Vectorize( function(beta0, beta1)
sum(ifelse(y==0, log1p(exp(beta0 + beta1*x)),
log1p(exp(-beta0 - beta1*x)))) )
}
negloglik <- make_negloglik(y, x)
mod.bb <- bbmle::mle2(negloglik, start=list(beta0=-2, beta1=4))
mod.prof <- bbmle::profile(mod.bb)
plot(mod.prof) # Not shown
grid <- expand.grid(beta0=seq(-2.8, -0.5, len=100),
beta1=seq(1.8, 7.1, len=100))
grid$negloglik <- with(grid, negloglik(beta0, beta1))
P <- function(beta0, beta1, x) 1/( 1 + exp( -beta0 -beta1 * x))
theta <- function(beta0, beta1) P(beta0, beta1, 0.75) - P(beta0, beta1, 0.25)
### Adding theta as a column to data.frame grid:
grid$theta <- with(grid, theta(beta0, beta1))
profile_negloglik <- function(grid) {
rt <- with(grid, range(theta))
seq_theta <- seq(rt[1], rt[2], len=201)
delta <- diff(seq_theta[1:2])
npl <- numeric(length=length(seq_theta))
for (t in seq_along(seq_theta)) {
tt <- seq_theta[t]
npl[t] <- with(grid, min(grid[ (tt-delta/2 <= theta) & (theta <= tt + delta/2),
"negloglik" ]))
}
return(data.frame(theta=seq_theta, npl=npl))
}
npl_frame <- profile_negloglik(grid)
npl_min <- with(npl_frame, min(npl))
library(ggplot2)
ggplot(npl_frame, aes(theta, npl)) + geom_line(color="red") +
ggtitle("Profile negative loglikelihood for theta") +
geom_hline(yintercept=npl_min) +
geom_hline(yintercept=npl_min +
qchisq(0.95, 1)/2, color="blue") +
geom_hline(yintercept=npl_min +
qchisq(0.99, 1)/2, color="blue") + ylim(52, 70)
The idea of the code is:
Define a rectangle in parameter space given by individual 99% confidence intervals (calculated by profiling with the R package bbmle)
use expand.grid to cover the rectangle
add to the grid data frame a column with the negative loglikelihood, another column with $\theta$
Find the range of $\theta$ and subdivide it in many small intervals
For each of the intervals, find the minimum negative log likelihood over the interval, and associate that with the midpoint
finally, plot this as an approximation of the negative profile loglikelihood function of $\theta$.
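One could also read the confidence limits off the profile numerically rather than from the plot. Here is a sketch on a toy quadratic profile (an assumption for illustration only); in the real analysis the same last three lines would be applied to the npl_frame computed above:

```r
# Read a 95% CI off a (negative) profile loglikelihood: keep the theta
# values whose profile lies below min + qchisq(0.95, 1)/2.
theta_grid <- seq(0, 1, len = 201)
npl <- 50 + 40 * (theta_grid - 0.5)^2   # toy quadratic profile

cutoff <- min(npl) + qchisq(0.95, 1) / 2
inside <- theta_grid[npl <= cutoff]
c(lower = min(inside), upper = max(inside))
```

This gives the interval to within the resolution of the grid; a finer grid (or interpolation across the crossing points) sharpens the endpoints.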
As a comparison, let us also calculate an approximate 95% confidence interval using the delta method. Calculations in R:
theta_grad <- deriv(expression( 1/( 1 + exp( -beta0 -beta1 * 0.75))
- 1/( 1 + exp( -beta0 -beta1 * 0.25))),
c("beta0", "beta1"), function.arg=TRUE)
grad <- theta_grad(coef(model)[1], coef(model)[2])
grad
(Intercept)
0.4880566
attr(,"gradient")
beta0 beta1
[1,] -0.05555914 0.06582565
grad <- attr(grad, "gradient")
V <- vcov(model)
theta.se <- sqrt( grad %*% V %*% t(grad) )
( CI <- c(0.4881 -2*theta.se, 0.4881 + 2*theta.se ) )
[1] 0.3154351 0.6607649
which is quite close to the profile interval.
35,072 | Confidence interval for difference between two predicted probabilities in R | The emmeans package provides an easy and reliable way to calculate an asymptotic confidence interval for the difference in probabilities via the multivariate delta method (see also @kjetilbhalvorsen's answer for computational details):
library(emmeans)
x_pred <- c(0.75, 0.25)
emm_resp <- emmeans(model, ~ x, at = list(x = x_pred), regrid = "response")
# infer = c(TRUE, TRUE) gives test statistic w/ corresponding p-value, and CI
pairs(emm_resp, infer = c(TRUE, TRUE))
# contrast estimate SE df asymp.LCL asymp.UCL z.ratio p.value
# x0.75 - x0.25 0.488 0.0863 Inf 0.319 0.657 5.653 <.0001
#
# Confidence level used: 0.95
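As a quick arithmetic check on the printout, the z ratio and confidence limits follow directly from the displayed estimate and standard error (the tiny discrepancies come from emmeans computing with unrounded numbers before display):

```python
import math

est, se = 0.488, 0.0863                      # rounded values from the emmeans printout
z = est / se                                 # z ratio (printout: 5.653 from unrounded inputs)
lo, hi = est - 1.96 * se, est + 1.96 * se    # Wald 95% CI, roughly (0.319, 0.657)
p = math.erfc(z / math.sqrt(2))              # two-sided normal p-value, essentially 0
```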
35,073 | Confidence interval for difference between two predicted probabilities in R | Method: Simulation assuming multivariate normality of coefficients
You could use the coefficients and their covariance matrix returned by the model to do simulations (Carsey & Harden 2013, King et al. 2000). The procedure works as follows:
Estimate the model and store the coefficients and covariance matrix.
Draw randomly from a multivariate normal distribution in which the means and covariance matrix are set to the stored values from step 1.
Repeat step 2 a large number of times and store the random draws of each simulation.
Calculate the quantities of interest for each draw of coefficients and store them.
Summarize the stored quantities of interest (e.g. with the mean and quantiles).
Here's the example with your simulated data:
library(MASS)
set.seed(1234)
x <- runif(100, 0, 1)
y <- rbinom(100, size=1, prob = x)
model <- glm(y ~ x, family = binomial("logit"))
# Store coefficients and covariance matrix
betas <- coef(model)
covmat <- vcov(model)
# Draw from a multivariate normal distribution
n_draws <- 1e5
model_sim <- mvrnorm(n_draws, mu = betas, Sigma = covmat)
# Function to calculate the quantity of interest (difference of probabilities)
calc_probdiff <- function(coefs, pred1, pred2) {
plogis(as.matrix(pred1) %*% coefs) - plogis(as.matrix(pred2) %*% coefs)
}
# Apply the function on the simulated coefficients
res <- apply(model_sim, 1, calc_probdiff, pred1 = data.frame(intercept = 1, x = 0.75), pred2 = data.frame(intercept = 1, x = 0.25))
# Visualize and summarize
hist(res)
mean(res)
[1] 0.4781587
quantile(res, c(0.025, 0.975))
2.5% 97.5%
0.2961805 0.6334994
Based on this simulation using $100\,000$ random draws, the difference of predicted probabilities is $0.478$ with a corresponding 95% confidence interval of $(0.296; 0.633)$. This is similar to the other answers.
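The same steps can be sketched with numpy alone, given point estimates and a covariance matrix from any fitted logistic model; the numbers below are hypothetical stand-ins for `coef(model)` and `vcov(model)`:

```python
import numpy as np

# hypothetical coefficient estimates and covariance matrix from a logistic fit
betas = np.array([-1.6, 3.4])
covmat = np.array([[0.25, -0.35],
                   [-0.35, 0.70]])

def expit(z):
    return 1.0 / (1.0 + np.exp(-z))

# steps 2-3: repeated draws from the multivariate normal of the coefficients
rng = np.random.default_rng(1234)
draws = rng.multivariate_normal(betas, covmat, size=100_000)

# step 4: quantity of interest (difference of predicted probabilities) per draw
diffs = (expit(draws[:, 0] + 0.75 * draws[:, 1])
         - expit(draws[:, 0] + 0.25 * draws[:, 1]))

# step 5: summarize with the mean and percentile interval
point = diffs.mean()
ci = np.quantile(diffs, [0.025, 0.975])
```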
The package clarify simplifies these steps greatly. Here it is in action (output is not shown):
library(MASS)
library(clarify)
set.seed(1234)
x <- runif(100, 0, 1)
y <- rbinom(100, size=1, prob = x)
model <- glm(y ~ x, family = binomial("logit"))
# Simulate coefficients from the model
s <- sim(model, n = 1e4)
# Define the function that calculates the difference between the predicted probabilities
my_fun <- function(fit) {
predict(fit, newdata = data.frame(x = 0.75), type = "response") - predict(fit, newdata = data.frame(x = 0.25), type = "response")
}
# Apply the function to the simulated coefficients earlier
est1 <- sim_apply(s, FUN = my_fun)
# Plot and summarize the results
plot(est1, method = "quantile", reference = TRUE)
summary(est1, ci = TRUE, level = 0.95, method = "quantile")
Method: Parametric bootstrap
A parametric bootstrap confidence interval can be calculated using the following steps (Adjei & Karim 2016):
Estimate the model and store the predicted probabilities $\hat{\pi}_i$.
Draw a bootstrap sample $(x, y^{*})$ where $y^{*}_i=\operatorname{Ber}(\hat{\pi}_i)$.
Fit the model with the data obtained in step 2.
Estimate the difference in predicted probabilities from model in step 3 and store them.
Repeat steps 2-4 a large number of times.
Summarize the stored differences.
Again in R:
set.seed(1234)
x <- runif(100, 0, 1)
y <- rbinom(100, size = 1, prob = x)
model <- glm(y ~ x, family = binomial("logit"))
pihat <- predict(model, type = "response")
param_boot <- replicate(1e4, {
ystar <- rbinom(100, 1, prob = pihat)
mod <- glm(ystar~x, family = binomial)
c(plogis(matrix(c(1, 0.75), ncol = 2) %*% matrix(coef(mod))) - plogis(matrix(c(1, 0.25), ncol = 2) %*% matrix(coef(mod))))
})
mean(param_boot)
[1] 0.4903014
quantile(param_boot, c(0.025, 0.975))
2.5% 97.5%
0.3185213 0.6580579
Based on this simulation using $10\,000$ replications, the difference of predicted probabilities is $0.49$ with a corresponding 95% confidence interval of $(0.319; 0.658)$.
Method: Nonparametric bootstrap
A nonparametric bootstrap confidence interval can be calculated using the following steps (Adjei & Karim 2016, see also my answer here):
Draw $n$ observations from the original dataset of size $n$ with replacement.
Fit the model using the data obtained in step 1.
Estimate the difference in predicted probabilities from model in step 2 and store them.
Repeat steps 1-3 a large number of times.
Summarize the stored differences.
Here's how you could do it in R:
set.seed(1234)
x <- runif(100, 0, 1)
y <- rbinom(100, size = 1, prob = x)
dat <- data.frame(x, y)
nonparam_boot <- replicate(1e4, {
mod <- glm(y~x, family = binomial, data = dat[sample.int(100, replace = TRUE), ])
c(plogis(matrix(c(1, 0.75), ncol = 2) %*% matrix(coef(mod))) - plogis(matrix(c(1, 0.25), ncol = 2) %*% matrix(coef(mod))))
})
mean(nonparam_boot)
[1] 0.4906483
quantile(nonparam_boot, c(0.025, 0.975))
2.5% 97.5%
0.3266630 0.6551795
Based on the nonparametric bootstrap using $10\,000$ replications, the difference of predicted probabilities is $0.49$ with a corresponding 95% confidence interval of $(0.327; 0.655)$.
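A compact Python version of the same resampling loop, with a hand-rolled Newton–Raphson fit of the logistic model (numpy only; the data simulation mirrors the question, though a different RNG means the numbers will differ, and the number of replications is kept small for speed):

```python
import numpy as np

rng = np.random.default_rng(1234)
n = 100
x = rng.uniform(0.0, 1.0, n)
y = rng.binomial(1, x)

def expit(z):
    return 1.0 / (1.0 + np.exp(-z))

def fit_logistic(x, y, iters=25):
    # Newton-Raphson for the two-parameter logistic model
    X = np.column_stack([np.ones_like(x), x])
    beta = np.zeros(2)
    for _ in range(iters):
        p = expit(X @ beta)
        W = p * (1 - p)
        # small ridge keeps the 2x2 solve stable on awkward resamples
        H = X.T @ (W[:, None] * X) + 1e-8 * np.eye(2)
        beta = beta + np.linalg.solve(H, X.T @ (y - p))
    return beta

def theta(beta):
    return expit(beta[0] + 0.75 * beta[1]) - expit(beta[0] + 0.25 * beta[1])

# steps 1-4: resample rows with replacement, refit, store the difference
B = 500
boot = np.empty(B)
for b in range(B):
    idx = rng.integers(0, n, n)
    boot[b] = theta(fit_logistic(x[idx], y[idx]))

# step 5: summarize
est = boot.mean()
ci = np.quantile(boot, [0.025, 0.975])
```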
References
Adjei, I. A., & Karim, R. (2016). An application of bootstrapping in logistic regression model. Open Access Library Journal, 3(9), 1-9.
Carsey, T. M., & Harden, J. J. (2013). Monte Carlo simulation and resampling methods for social science. Sage Publications.
King, G., Tomz, M., & Wittenberg, J. (2000). Making the most of statistical analyses: Improving interpretation and presentation. American Journal of Political Science, 347-361.
35,074 | Confidence interval for difference between two predicted probabilities in R | As briefly discussed in my answer to the other poster, you can simply calculate the overall standard error from the two standard errors and use the normal approximation to get the confidence interval. The overall standard error is simply sqrt(se_0^2 + se_1^2). The following function implements this (in a very crude fashion):
get_diff_ci <- function(pred) {
diff <- pred$fit[2] - pred$fit[1]
diff_se <- sqrt(pred$se.fit[1]^2 + pred$se.fit[2]^2)
upper <- diff + 1.96 * diff_se
lower <- diff - 1.96 * diff_se
out_dat <- data.frame(diff=diff, lower=lower, upper=upper)
return(out_dat)
}
So in your example it would be:
set.seed(1234)
x = runif(100, 0, 1)
y = rbinom(100, size=1, prob = x)
model = glm(y ~ x, family = binomial("logit"))
newdata = data.frame(x = c(.25, .75))
predicted.probs = predict(model, newdata, type="response", se.fit = T)
get_diff_ci(predicted.probs)
To "prove" that this is appropriate, a small monte-carlo simulation using the original example given:
set.seed(1234)
n_repeats <- 10000
out <- vector(mode="list", length=n_repeats)
for (i in 1:n_repeats) {
x = runif(100, 0, 1)
y = rbinom(100, size=1, prob = x)
model = glm(y ~ x, family = binomial("logit"))
newdata = data.frame(x = c(.25, .75))
pred <- predict(model, newdata, type="response", se.fit=T)
out[[i]] <- get_diff_ci(pred)
}
out <- dplyr::bind_rows(out)
true_diff <- mean(out$diff)
out$true_in_ci <- true_diff <= out$upper & true_diff >= out$lower
mean(out$true_in_ci)
Here I simply repeated your simulation 10000 times and calculated the difference and confidence interval of the two predicted probabilities for each repetition. The mean of the individual differences of the probabilities can be used as an estimate for the true underlying difference (because I am too lazy to derive the actual value). By simply checking what proportion of the estimated CIs contain this true value we can judge whether the confidence interval is correct. In this case the confidence intervals contain the true value 94.59% of the time, which is almost exactly equal to the desired 95%. The discrepancy is probably due to the rounding of the 1.96 z-value and simulation error.
EDIT (15.06.2022):
As pointed out by @Sextus Empiricus, this method does not work as well as it seemed from the single Monte Carlo study above and should probably not be used.
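One likely reason the simple sum-of-variances interval can misbehave is that it ignores the covariance between the two predicted probabilities, which share the same fitted coefficients: $\operatorname{Var}(\hat p_1 - \hat p_0) = \operatorname{Var}(\hat p_1) + \operatorname{Var}(\hat p_0) - 2\operatorname{Cov}(\hat p_0, \hat p_1)$. A small Python illustration with made-up (hypothetical) variance numbers:

```python
import math

# hypothetical variances and covariance of the two predicted probabilities
var0, var1, cov01 = 0.0040, 0.0035, 0.0015

se_naive = math.sqrt(var0 + var1)                  # ignores the covariance term
se_correct = math.sqrt(var0 + var1 - 2 * cov01)    # variance of the difference
ratio = se_naive / se_correct                      # > 1 here, since cov01 > 0
```

With positive covariance the naive interval is too wide; with negative covariance it would be too narrow.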
35,075 | What is the output of an LSTM | The basic recurrent neural network (RNN) cell is something that takes as input the previous hidden state $h_{t-1}$ and the current input $x_t$ and returns the current hidden state
$$ h_t = \tanh(W_{hh}h_{t-1} + W_{xh}x_t) $$
The same applies to the LSTM, but it is just a little bit more complicated, as described in this great blog post. So, answering your second question: at each step the RNN cell returns an output that can be used to make predictions. There are two ways of using RNNs: you can either process the whole input sequence and look only at the last output state (e.g. process a whole sentence and then classify its sentiment), or use the intermediate outputs (in Keras this is the return_sequences=True parameter) and process them further, or make some kind of prediction from each of them (e.g. named-entity recognition for each word of a sentence). The only difference is that in the first case you simply ignore the intermediate states. If this is too abstract, the following figure (from the blog post referred to above) may be helpful.
As you can see, at each step you have some output $h_t$ that is a function of current input $x_t$ and all the history, as passed through the previous hidden state $h_{t-1}$.
As for the shape of the hidden state, this is matrix algebra, so the shape will depend on the shapes of the inputs and the weights. If you use some pre-built software, like Keras, then it is controlled by the parameters of the LSTM cell (the number of hidden units). If you code it by hand, it will depend on the shape of the weights.
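The recurrence above is easy to write out directly. A tiny numpy sketch of a vanilla RNN cell unrolled over one sequence, keeping both every hidden state (the analogue of return_sequences=True) and only the last one; the sizes are purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
d_in, d_hidden, T = 3, 5, 7          # input size, hidden size, sequence length

W_xh = rng.normal(0, 0.1, (d_hidden, d_in))
W_hh = rng.normal(0, 0.1, (d_hidden, d_hidden))

xs = rng.normal(size=(T, d_in))      # one input sequence
h = np.zeros(d_hidden)               # initial hidden state

states = []
for t in range(T):
    # h_t = tanh(W_hh h_{t-1} + W_xh x_t)
    h = np.tanh(W_hh @ h + W_xh @ xs[t])
    states.append(h)

states = np.stack(states)            # shape (T, d_hidden): all intermediate outputs
last = states[-1]                    # shape (d_hidden,): the "last output state" only
```

The shapes make the point about the weights: `last` has one value per hidden unit, and `states` stacks one such vector per time step.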
35,076 | Pretext Task in Computer Vision | A pretext task is used in self-supervised learning to generate useful feature representations, where "useful" is defined nicely in this paper:
By “useful” we mean a representation that should be easily adaptable for other tasks, unknown during training time.
This paper gives a very clear explanation of the relationship of pretext and downstream tasks:
Pretext Task: Pretext tasks are pre-designed tasks for networks to solve, and visual features are learned by learning objective functions of pretext tasks.
Downstream Task: Downstream tasks are computer
vision applications that are used to evaluate the quality of features learned by self-supervised learning.
These applications can greatly benefit from the pretrained models when training data are scarce.
A popular pretext task is minimizing reconstruction error in autoencoders to create lower-dimensional feature representations. Those representations are then used for whatever task you like, with the idea that if the decoder was able to come close to reconstructing the original input, all the essential information exists in the bottleneck layer of the autoencoder, and you can use that lower-dimensional representation as a proxy for the full input.
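As a toy version of the reconstruction pretext task, here is a minimal linear autoencoder trained by plain gradient descent in numpy; the bottleneck code `X @ W1` is the learned representation one would reuse downstream. Dimensions, step size, and iteration count are illustrative, and real pretext models are of course nonlinear networks:

```python
import numpy as np

rng = np.random.default_rng(42)
n, d, k = 200, 10, 3                 # samples, input dim, bottleneck dim

# data with low-rank structure plus noise, so a 3-d code can capture most of it
X = rng.normal(size=(n, k)) @ rng.normal(size=(k, d)) + 0.05 * rng.normal(size=(n, d))

W1 = 0.1 * rng.normal(size=(d, k))   # encoder
W2 = 0.1 * rng.normal(size=(k, d))   # decoder

def loss(W1, W2):
    R = X @ W1 @ W2 - X              # reconstruction residual
    return (R * R).mean()

lr = 0.01
loss0 = loss(W1, W2)
for _ in range(500):
    R = X @ W1 @ W2 - X
    G = (2.0 / R.size) * R           # gradient of the mean squared error w.r.t. R
    gW2 = (X @ W1).T @ G
    gW1 = X.T @ (G @ W2.T)
    W1 -= lr * gW1
    W2 -= lr * gW2
loss1 = loss(W1, W2)

features = X @ W1                    # pretext-learned representation for downstream use
```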
Another pretext task in vision is image inpainting in context encoders where the network tries to fill in blanked out regions of an image based on surrounding pixels. Yet another one is grayscale colorization that, as the name suggests, tries to colorize a grayscale image, with the idea that in order to do that the network must represent the spatial layout of the image as well as some semantic knowledge. For example, coloring a grayscale school bus as yellow captures a common regularity about school buses as opposed to a city bus which might be any color. So, if your task were, say, classifying vehicles by type, you might perform better on this task predicting from this learned representation because it has encoded spatial and color information that correlates well with our semantic labeling of our environment.
Note that pretext tasks are not unique to computer vision, but since vision dominates a lot of active machine learning research these days, there are many good examples of pretext tasks that have been demonstrated to help in vision-related tasks. An interesting multimodal example is this paper where they train a network to predict whether or not the input audio and video streams are temporally aligned. Using those features, they are able to perform cool tasks like sound-source localization, action recognition, and on/off-screen prediction (i.e. separating out the audio associated with what is visible on screen and what is background audio coming from outside the visual frame).
35,077 | Pretext Task in Computer Vision | @darshak Pretext tasks are pre-designed tasks for networks to solve, and visual features are learned by learning objective functions of pretext tasks.
35,078 | How can I use scaling and log transforming together? | You can form a pipeline and apply standard scaling and log transformation subsequently. In this way, you can just train your pipelined regressor on the train data and then use it on the test data. For every input, the pipelined regressor will standardize and log transform the input before making the prediction.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.preprocessing import FunctionTransformer
from imblearn.pipeline import Pipeline
def log_transform(x):
    print(x)
    return np.log(x + 1)
scaler = StandardScaler()
transformer = FunctionTransformer(log_transform)
pipe = Pipeline(steps=[('scaler', scaler), ('transformer', transformer), ('regressor', your_regressor)], memory='sklearn_tmp_memory')
pipe.fit(X_train, y_train)
pipe.score(X_test, y_test)
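A runnable variant of the pipeline above (my sketch, not the original code): it uses sklearn's own Pipeline, a Ridge regressor as a stand-in for your_regressor, and — to keep the log's input positive — applies the log before scaling, reversing the order used above:

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import FunctionTransformer, StandardScaler

rng = np.random.default_rng(0)
X = rng.lognormal(size=(200, 3))                       # skewed, strictly positive features
y = np.log(X[:, 0]) + rng.normal(scale=0.1, size=200)  # toy target

pipe = Pipeline(steps=[
    ("transformer", FunctionTransformer(np.log1p)),    # log(x + 1) first
    ("scaler", StandardScaler()),                      # then standardize
    ("regressor", Ridge()),                            # stand-in regressor
])
pipe.fit(X[:150], y[:150])
print(pipe.score(X[150:], y[150:]))                    # R^2 on held-out rows
```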
35,079 | How can I use scaling and log transforming together? | To apply the log transform you would use numpy. NumPy is a dependency of scikit-learn and pandas, so it will already be installed.
import numpy as np
X_train = np.log(X_train)
X_test = np.log(X_test)
You may also be interested in applying that transformation earlier in your pipeline before splitting data into training and test sets.
# Assumes X and y have already been defined
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split
X = np.log(X)
X_train, X_test, y_train, y_test = train_test_split(X, y)
sc = StandardScaler()
X_train = sc.fit_transform(X_train)
X_test = sc.transform(X_test)
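A quick numeric check of the log-then-scale recipe on made-up positive data (names and values are illustrative):

```python
import numpy as np
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.lognormal(size=(500, 2))   # skewed, strictly positive features

X_log = np.log(X)                  # log-transform first
sc = StandardScaler()
X_std = sc.fit_transform(X_log)    # then standardize

# each column now has (near-)zero mean and unit variance
print(X_std.mean(axis=0), X_std.std(axis=0))
```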
35,080 | How can I use scaling and log transforming together? | I had the same issue, with the additional inconvenience of only wanting to apply the transforms to a subset of my features.
My solution is essentially the same as Panagiotis Koromilas's, with these key changes:
You can specify a subset of columns to transform
The log is applied before StandardScaler(). StandardScaler() typically results in ~half your values being below 0, and it's not possible to take the log of a negative value.
The inbuilt numpy function np.log1p is used. This allows you to easily pickle the model & pipeline with joblib.dump() and use it elsewhere without needing to make your custom log_transform() function available to the code importing the pickled model.
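A quick illustration of those last two points (values made up): np.log1p is defined at exactly 0, but it still fails on typical standardized output, which is why the log comes first here:

```python
import numpy as np

positive = np.array([0.0, 1.0, 10.0])
print(np.log1p(positive))                   # log(1 + x): defined on [0, inf)

standardized = np.array([-1.2, 0.0, 1.2])   # ~half of StandardScaler output is negative
with np.errstate(invalid="ignore"):
    after = np.log1p(standardized)          # log1p(x) requires x > -1
print(after)                                # first entry is nan
```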
import pandas as pd
import numpy as np
from sklearn.preprocessing import StandardScaler, FunctionTransformer
from sklearn.compose import ColumnTransformer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline
# these columns will have the scaling and transforms applied to them
COLS_TO_TRANSFORM = ['col1', 'col2']
# set up the log transformer
log_transform = ColumnTransformer(
transformers=[
('log', FunctionTransformer(np.log1p), COLS_TO_TRANSFORM),
],
verbose_feature_names_out=False, # if True, "log_" will be prefixed to the column names that have been transformed
remainder='passthrough' # this allows columns not being transformed to pass through unchanged
)
# this ensures that the transform outputs a DataFrame, so that the column names are available for the next step.
log_transform.set_output(transform='pandas')
# set up whatever other scaling you want
scale = ColumnTransformer(
transformers=[
('scale', StandardScaler(), COLS_TO_TRANSFORM),
],
verbose_feature_names_out=False,
remainder='passthrough'
)
scale.set_output(transform='pandas')
# put it all together
model = Pipeline(steps=[
    ("log", log_transform),
    ("scale", scale),
    ("regressor", LogisticRegression())
])
PS:
set_output() is a new addition in scikit-learn 1.2.0. Before this it was quite awkward to preserve column names when using ColumnTransformer. More detail
35,081 | Does GAM (Generalized Additive Model) have collinearity problem? | GAM models can be afflicted by concurvity (the extension of GLM collinearity to GAM models).
According to https://stat.ethz.ch/R-manual/R-devel/library/mgcv/html/concurvity.html:
"Concurvity occurs when some smooth term in a model could be approximated by
one or more of the other smooth terms in the model. This is often the case
when a smooth of space is included in a model, along with smooths of other
covariates that also vary more or less smoothly in space. Similarly it tends
to be an issue in models including a smooth of time, along with smooths of
other time varying covariates.
Concurvity can be viewed as a generalization of co-linearity, and causes
similar problems of interpretation. It can also make estimates somewhat
unstable (so that they become sensitive to apparently innocuous modelling
details, for example)."
The above link explains how you can compute three different measures of concurvity for a GAM model fitted with the mgcv package in R, all of which are bounded between 0 and 1 (with 0 indicating no concurvity).
Thus, you do have to check for the potential presence of concurvity in your GAM models by computing appropriate measures of concurvity and making sure they are not too high (i.e., not too close to 1). See also gam smoother vs parametric term (concurvity difference), https://jroy042.github.io/nonlinear/week3.html and https://eric-pedersen.github.io/mgcv-esa-workshop/slides/02-model_checking.html#/.
35,082 | Confidence regions on bivariate normal distributions using $\hat{\Sigma}_{MLE}$ or $\mathbf{S}$ | Assume first that the parameters $\boldsymbol\mu$ and $\boldsymbol\Sigma$ are known. Just as $\frac{x-\mu}\sigma$ is standard normal and $\frac{(x-\mu)^2}{\sigma^2}$ is chi-square with 1 degree of freedom in the univariate case, the quadratic form
$(\mathbf{x}-\boldsymbol\mu)^T\boldsymbol\Sigma^{-1}(\mathbf{x}-\boldsymbol\mu)$ is chi-square with $p$ degrees of freedom when $\mathbf{x}$ is multivariate normal. Hence, this pivot satisfies
$$
(\mathbf{x}-\boldsymbol\mu)^T\boldsymbol\Sigma^{-1}(\mathbf{x}-\boldsymbol\mu)\le \chi_{p,\alpha}^2 \tag{1}
$$
with probability $(1-\alpha)$. A probability region for $\mathbf{x}$ is found by inverting (1) with respect to $\mathbf{x}$. For points at the boundary of this set, ${\mathbf{L}^{-1}}(\mathbf{x}-\boldsymbol{\mu})$ lies on a circle with radius $\sqrt{\chi^2_{p,\alpha}}$ where $\mathbf L$ is the Cholesky factor of $\boldsymbol\Sigma$ (or some other square root) such that
$$
\mathbf{L}^{-1}(\mathbf{x}-\boldsymbol{\mu})=\sqrt{\chi^2_{p,\alpha}}
\left[
\begin{matrix}
\cos(\theta)\\
\sin(\theta)
\end{matrix}
\right].
$$
Hence, the boundary of the set (an ellipse) is described by the parametric curve
$$
\mathbf{x}(\theta)=
\boldsymbol{\mu} + \sqrt{\chi^2_{p,\alpha}}\mathbf{L}
\left[
\begin{matrix}
\cos(\theta)\\
\sin(\theta)
\end{matrix}
\right],
$$
for $0<\theta <2\pi$.
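As a quick check (my addition, not in the original answer), substituting the parametric curve back into (1), with $\boldsymbol\Sigma=\mathbf{L}\mathbf{L}^T$ and $\mathbf{u}(\theta)=(\cos\theta,\sin\theta)^T$, confirms that every such point lies exactly on the boundary:
$$
(\mathbf{x}(\theta)-\boldsymbol\mu)^T\boldsymbol\Sigma^{-1}(\mathbf{x}(\theta)-\boldsymbol\mu)
=\chi^2_{p,\alpha}\,\mathbf{u}(\theta)^T\mathbf{L}^T(\mathbf{L}\mathbf{L}^T)^{-1}\mathbf{L}\,\mathbf{u}(\theta)
=\chi^2_{p,\alpha}\,\mathbf{u}(\theta)^T\mathbf{u}(\theta)
=\chi^2_{p,\alpha}.
$$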
If the parameters are unknown and we use $\bar{\mathbf{x}}$ to estimate $\boldsymbol\mu$, then $\mathbf{x}-\bar{\mathbf{x}} \sim N_p(\mathbf{0},(1+1/n)\boldsymbol{\Sigma})$. Hence, $(1+1/n)^{-1}(\mathbf{x}-\bar{\mathbf{x}})^T\boldsymbol\Sigma^{-1}(\mathbf{x}-\bar{\mathbf{x}})$ is chi-square with $p$ degrees of freedom. Substituting $\boldsymbol\Sigma$ by its estimate $\hat{\boldsymbol\Sigma}=\frac1{n-1}\mathbf{X}^T \mathbf{X}$, the resulting pivot is instead Hotelling $T$-squared distributed with $p$ and $n-p$ degrees of freedom (analogous to the $F_{1,n-1}$ distributed squared $t$-statistic in the univariate case) such that
$$
\Big(1+\frac1n\Big)^{-1}(\mathbf{x}-\bar{\mathbf{x}})^T\hat{\boldsymbol\Sigma}^{-1}(\mathbf{x}-\bar{\mathbf{x}})
\le T^2_{p,n-p,\alpha} \tag{2}
$$
with probability $(1-\alpha)$. Because the Hotelling $T$-squared is just a rescaled $F$-distribution, the above quantile equals $\frac{p(n-1)}{n-p}F_{p,n-p,\alpha}$.
Inverting (2) with respect to $\mathbf{x}$ leads to a prediction region with boundary described by the parametric curve
$$
\mathbf{x}(\theta)=
\bar{\mathbf x} + \sqrt{\Big(1+\frac1n\Big)\frac{p(n-1)}{n-p}F_{p,n-p,\alpha}}\hat{\mathbf{L}}
\left[
\begin{matrix}
\cos(\theta)\\
\sin(\theta)
\end{matrix}
\right]
$$
where $\hat{\mathbf L}$ is the cholesky factor of the sample variance matrix $\hat{\boldsymbol\Sigma}$.
Code computing this for the data in the original question:
pred.int.mvnorm <- function(x, alpha=.05) {
  p <- ncol(x)
  n <- nrow(x)
  Sigmahat <- var(x)
  xbar <- apply(x, 2, mean)
  theta <- seq(0, 2*pi, length=100)
  polygon <- xbar +
    sqrt(p*(n - 1)/(n - p)*(1 + 1/n)*qf(alpha, p, n - p, lower.tail = FALSE)) *
    t(chol(Sigmahat)) %*%
    rbind(cos(theta), sin(theta))
  t(polygon)
}
x <- matrix(c(-0.9,2.4,-1.4,2.9,2.0,0.2,0.7,1.0,-0.5,-1.0),ncol=2)
plot(pred.int.mvnorm(x), type="l",xlab=expression(x[1]),ylab=expression(x[2]))
points(x)
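The construction can also be verified numerically outside R. The Python sketch below (my addition, using the same five data points) checks that a point generated by the parametric curve satisfies (2) with equality:

```python
import numpy as np
from scipy import stats

# same 5x2 data as the R example (the R call fills the matrix column-major)
x = np.array([[-0.9, 0.2], [2.4, 0.7], [-1.4, 1.0], [2.9, -0.5], [2.0, -1.0]])
n, p = x.shape
alpha = 0.05

xbar = x.mean(axis=0)
S = np.cov(x, rowvar=False)    # sample covariance (divisor n - 1)
L = np.linalg.cholesky(S)      # lower triangular, S = L @ L.T

t2 = p * (n - 1) / (n - p) * stats.f.ppf(1 - alpha, p, n - p)
radius = np.sqrt((1 + 1 / n) * t2)

theta = 0.7                    # any angle gives a boundary point
pt = xbar + radius * L @ np.array([np.cos(theta), np.sin(theta)])

# left-hand side of (2) evaluated at the boundary point
lhs = (pt - xbar) @ np.linalg.solve(S, pt - xbar) / (1 + 1 / n)
print(lhs, t2)                 # these agree
```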
More code testing the coverage
library(mvtnorm)
library(sp)
hits <- 0
for (i in 1:1e+5) {
  x <- rmvnorm(6, sigma = diag(2))
  pred.int <- pred.int.mvnorm(x[-1, ])
  x <- x[1, ]
  if (point.in.polygon(x[1], x[2], pred.int[, 1], pred.int[, 2]) == 1)
    hits <- hits + 1
}
hits
[1] 94955
35,083 | Choosing k in mgcv's gam() | There is some confusion here and in the answer by @Ira S in that linked post. The default value of the argument k is -1. This indicates that the default number of basis functions should be computed for the specified basis type (the default is thin plate splines but you can ask for others via the bs argument). So for a univariate thin plate spline you will get 10 basis functions by default because k = -1 implies a default of 10, and in reality you get 9 basis functions as the constant basis function, which is confounded with the model intercept term, gets removed from the basis by the application of a sum-to-zero identifiability constraint.
Given a basis expansion, mgcv::gam() will fit the required model using penalised likelihood to estimate parameters for the basis functions and the intercept and any other parametric terms, conditional upon a smoothness parameter, and also estimate the smoothness parameter which is what actually chooses the complexity (wiggliness) of the final fitted smooth function.
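For a Gaussian model, the penalized criterion being described takes the standard form (my notation, not from the original answer):
$$
\hat{\boldsymbol\beta}=\arg\min_{\boldsymbol\beta}\;\lVert\mathbf{y}-\mathbf{X}\boldsymbol\beta\rVert^2+\lambda\,\boldsymbol\beta^{T}\mathbf{S}\boldsymbol\beta,
$$
where the columns of $\mathbf{X}$ contain the evaluated basis functions, $\mathbf{S}$ is a penalty matrix measuring the wiggliness of the fitted smooth, and the smoothness parameter $\lambda$ is what GCV/REML/ML actually estimates.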
mgcv::gam() can use GCV, or REML, or ML to estimate the coefficients and smoothness parameter(s) of the model. It will do this estimation for you whatever value you pass to k. You can only stop it doing this smoothness selection by adding the argument fx = TRUE to the s() call for each smooth.
With mgcv::gam() the main issue you face is to set the initial basis size. You don't need to choose knot locations with the thin plate splines (there is a knot at each unique data value, and then a low-rank version of the full basis expansion with k basis functions is found) and for most of the less exotic bases, knot placement typically makes little to no difference on the fitted model.
You want to set k to be large, as large as you can afford given the amount of data you have, but you don't want it to be too large as it takes a lot more computational effort to work with all those basis functions especially if many/most of them will be penalised away to zero in the resulting model fit.
So, in your case, I would set k to be some large enough value that the expected wiggliness of the true function is accommodated. If you have lots of data and can bear the computational burden, you can effectively set this as high as you want.
Assuming you have the correct model specified, the penalty should deal with the extra wiggliness.
While I have found GCV to be a little bit more robust to model mis-specification for some models I have fitted, I prefer to use REML for smoothness selection, and this will become the default in a future version of mgcv, so I recommend that you use that, not GCV.
35,084 | Econometrics: What are the assumptions of logistic regression for causal inference? | The capacity to interpret regression relationships as causal generally depends on experimental protocols rather than the assumed structure of the statistical model. Regression models allow us to relate the explanatory variables statistically to the response variable, where this relationship is made conditional on all the explanatory variables in the model. As a default position, that is still just a predictive relationship, and should not be interpreted causally. That is the case in standard linear regression using OLS estimation, and it is also true in logistic regression.
Suppose we want to interpret a regression relationship causally ---e.g., we have an explanatory variable $x_k$ and we want to interpret its regression relationship with the response variable $Y$ as a causal relationship (the former causing the latter). The thing we are scared of here is the possibility that the predictive relationship might actually be due to a relationship with some confounding factor, which is an additional variable outside the regression that is statistically related to $x_k$ and is the real cause of $Y$. If such a confounding factor exists, it will induce a statistical relationship between these variables that we will see in our regression. (The other mistake you can make is to condition on a mediator variable, which also leads to an incorrect causal inference.)
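A tiny simulation of that confounding story (all names and numbers invented): z causes both x and y, x has no causal effect on y, yet the regression of y on x alone shows a strong relationship that vanishes once z enters the model:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 10_000
z = rng.normal(size=n)             # confounder
x = z + 0.5 * rng.normal(size=n)   # z causes x
y = z + 0.5 * rng.normal(size=n)   # z causes y; x does not appear at all

# OLS slope on x, without and with the confounder in the design matrix
slope_naive = np.linalg.lstsq(np.c_[np.ones(n), x], y, rcond=None)[0][1]
slope_adjusted = np.linalg.lstsq(np.c_[np.ones(n), x, z], y, rcond=None)[0][1]
print(slope_naive, slope_adjusted)  # ~0.8 vs ~0.0
```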
So, in order to interpret regression relationships causally, we want to be confident that what we are seeing is not the result of confounding factors outside our analysis. The best way to ensure this is to use controlled experimentation to set $x_k$ via randomisation/blinding, thereby severing any statistical link between this explanatory variable and any would-be confounding factor. In the absence of this, the next best thing is to use uncontrolled analysis, but try to bring in as many possible confounding factors as we can, to filter them out in the regression. (No guarantees that we have found them all!) There are also other methods, such as using instrumental variables, but these generally hinge on strong assumptions about the nature of those variables.
None of the assumptions you mention are necessary or sufficient to infer causality. Those are just model assumptions for the logistic regression, and if they do not hold you can vary your model accordingly. The main assumption you need for causal inference is to assume that confounding factors are absent. That can be done by using a randomisation/blinding protocol in your experiment, or it can be left as a (hope-and-pray) assumption. | Econometrics: What are the assumptions of logistic regression for causal inference? | The capacity to interpret regression relationships as causal generally depends on experimental protocols rather than the assumed structure of the statistical model. Regression models allow us to rela | Econometrics: What are the assumptions of logistic regression for causal inference?
The capacity to interpret regression relationships as causal generally depends on experimental protocols rather than the assumed structure of the statistical model. Regression models allow us to relate the explanatory variables statistically to the response variable, where this relationship is made conditional on all the explanatory variables in the model. As a default position, that is still just a predictive relationship, and should not be interpreted causally. That is the case in standard linear regression using OLS estimation, and it is also true in logistic regression.
Suppose we want to interpret a regression relationship causally ---e.g., we have an explanatory variable $x_k$ and we want to interpret its regression relationship with the response variable $Y$ as a causal relationship (the former causing the latter). The thing we are scared of here is the possibility that the predictive relationship might actually be due to a relationship with some confounding factor, which is an additional variable outside the regression that is statistically related to $x_k$ and is the real cause of $Y$. If such a confounding factor exists, it will induce a statistical relationship between these variables that we will see in our regression. (The other mistake you can make is to condition on a mediator variable, which also leads to an incorrect causal inference.)
So, in order to interpret regression relationships causally, we want to be confident that what we are seeing is not the result of confounding factors outside our analysis. The best way to ensure this is to use controlled experimentation to set $x_k$ via randomisation/blinding, thereby severing any statistical link between this explanatory variable and any would-be confounding factor. In the absence of this, the next best thing is to use uncontrolled analysis, but try to bring in as many possible confounding factors as we can, to filter them out in the regression. (No guarantees that we have found them all!) There are also other methods, such as using instrumental variables, but these generally hinge on strong assumptions about the nature of those variables.
None of the assumptions you mention are necessary or sufficient to infer causality. Those are just model assumptions for the logistic regression, and if they do not hold you can vary your model accordingly. The main assumption you need for causal inference is to assume that confounding factors are absent. That can be done by using a randomisation/blinding protocol in your experiment, or it can be left as a (hope-and-pray) assumption. | Econometrics: What are the assumptions of logistic regression for causal inference?
35,085 | Econometrics: What are the assumptions of logistic regression for causal inference? | To add to Ben's great answer, here's a basic example of how a regression model (regardless of its type) might not be able to infer causality even if you think you've addressed every "assumption." Let's say we have a dataset from a survey of a bunch of people at a single time point. We run a logistic regression model with "being depressed" as the dependent variable and "opiate use" as the independent variable. Assume that we've totally accounted for all OTHER variables that might confound this relationship, and that all of the other assumptions of the model are satisfied as well. We find a significant, positive relationship.
Does this mean that opiate use causes depression? Maybe. But it might also mean that depression causes opiate use. Or maybe both are true at the same time (but one effect is stronger than the other). If all of the variables are collected at the same point in time, the model is not going to be able to distinguish between these VERY DIFFERENT causal processes. Only by adjusting our research design (e.g. measuring opiate use in one year and depression in the next year) can we solve this problem. Regression alone can't help us.
35,086 | Econometrics: What are the assumptions of logistic regression for causal inference? | Answering your question about non-identically distributed error terms: In logistic regression, the logit of the dependent variable is regressed on the predictors and the errors of this regression are, in fact, identically distributed and follow a logistic distribution. However, when back-transformed to the response scale, the error term can only take two values at each level of the linear predictor: $$e_i = \begin{cases} 1-\pi_i & \text{if } Y_i = 1 \\ -\pi_i & \text{if } Y_i = 0 \end{cases}$$ Because $e_i = Y_i - \pi_i$ (and $\pi_i$ is constant), the variance of this error term is equal to the variance of the binary variable $Y_i$. The variance of the binary variable $Y_i$ is given by $\sigma^2(Y_i) = \pi_i(1-\pi_i)$ and is non-constant because it is dependent on the mean $\pi_i$.
Kutner et al. (2005). Applied Linear Statistical Models (Ch. 14)
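A quick numerical check of the non-constant variance described above (my illustration, not from the original answer): for Bernoulli draws, the empirical variance of the errors $e_i = Y_i - \pi_i$ tracks $\pi_i(1-\pi_i)$, which changes with the mean.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200_000

emp_var = {}
for pi in (0.1, 0.5, 0.9):
    y = (rng.random(n) < pi).astype(float)   # Bernoulli(pi) draws
    e = y - pi                               # error term e_i = Y_i - pi_i
    emp_var[pi] = e.var()
    # The empirical variance matches pi * (1 - pi), so it depends on the
    # mean pi -- i.e. the errors are not identically distributed on this scale.
    print(f"pi={pi}: var(e)={emp_var[pi]:.4f}, pi*(1-pi)={pi*(1-pi):.4f}")
```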
35,087 | Do Autoencoders preserve distances? | No, they don't. We basically design them so that they cannot preserve distances. An autoencoder is a neural network which learns a "meaningful" representation of the input, preserving its "semantic" features. The quoted words (like so many terms in Deep Learning papers) have no rigorous definition, but let's say that, trained on a set of inputs, the autoencoder should learn some common features of these inputs, which allow it to reconstruct an unseen input with small error.1
The simplest way for the autoencoder to minimize the differences between input and output (reconstructed input) would be to just output the input, i.e., to learn the identity function, which is an isometry, thus it preserves distances. However, we don't want the autoencoder to simply learn the identity map, because otherwise we don't learn a "meaningful" representation, or, to say it better, we don't learn to "compress" the input by learning its basic semantic features and "throwing away" the minute details (the noise, in the case of a denoising autoencoder).
To prevent the autoencoder from learning the identity transformation, and forcing it to compress the input, we reduce the number of units in the hidden layers of the autoencoder (bottleneck layer or layers). In other words, we force it to learn a form of nonlinear dimensionality reduction: not for nothing, there is a deep connection between linear autoencoders and PCA, a well-known statistical procedure for linear dimensionality reduction.
However, this comes at a cost: by forcing the autoencoder to perform some kind of nonlinear dimensionality reduction, we prevent it from preserving distances. As a matter of fact, you can prove that there exists no isometry, i.e., no distance preserving transformation, between two Euclidean spaces $\mathbb{E}^n$ and $\mathbb{E}^m$ if $m < n$ (this is implicitly proven in this proof of another statement). In other words, a dimension-reducing transformation cannot be an isometry. This is quite intuitive, actually: if the autoencoder must learn to map elements of a high-dimensional vector space $V$, to elements of a lower-dimensional manifold $M$ embedded in $V$, it will have to "sacrifice" some directions in $V$, which means that two vectors differing only along these directions will be mapped to the same element of $M$. Thus, their distance, initially nonzero, is not preserved (it becomes 0).
NOTE: it can be possible to learn a mapping of a finite set of elements of $V$ $S=\{v_1,\dots,v_n\}$, to a finite set of elements $O=\{w_1,\dots,w_n\}\in M$, such that the pairwise distances are conserved. This is what multidimensional scaling attempts to do. However, it's impossible to map all the elements of $V$ to elements of a lower-dimensional space $W$ while preserving distances.
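A tiny numerical illustration of the "sacrificed directions" argument (mine, not part of the original answer; the map here is a hand-picked projection, not a trained autoencoder): any linear map from $\mathbb{E}^3$ to $\mathbb{E}^2$ has a nontrivial null space, so two points differing only along that direction collapse to the same code.

```python
import numpy as np

# A rank-2 "encoder" from R^3 to R^2: it simply drops the z-axis.
P = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0]])

u = np.array([1.0, 2.0, 0.0])
v = np.array([1.0, 2.0, 5.0])        # differs from u only along z

d_input = np.linalg.norm(u - v)            # distance 5.0 in the input space
d_code = np.linalg.norm(P @ u - P @ v)     # distance 0.0 in the code space

print(d_input, d_code)   # 5.0 0.0
```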
1 Things get more complicated when we refer to my favourite flavour of autoencoder, the Variational Autoencoder, but I won't focus on them here.
35,088 | Do Autoencoders preserve distances? | You can train a network with any loss function you like. Thus, approach 1, you can create a loss function that pushes the network to ensure that the distance between pairs in a mini-batch in the output equals that between pairs in the input. If you do it on a mini-batch basis, and batch-size is say 16 or 32, that seems not unworkable. Or you could sample a few pairs, and calculate the loss on those (same number of pairs each mini-batch, eg sampled randomly).
As far as creating a non-linear network that is guaranteed to preserve distance, an approach 2, I think one approach could be to build the network out of blocks which themselves preserve distances, eg rotations. I'm not sure that this network could be anything other than a linear transformation, and just a rotation at that. Any non-linearity, such as a sigmoid squashing, would deform the distances.
I think approach 1 sounds workable to me, although no guarantee that distances are always preserved, and they won't be very exactly preserved. The second approach sounds intuitively to me that you'd be limited to a single rotation transformation?
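A minimal numpy sketch of the loss term in approach 1 (my illustration; the function names and shapes are assumptions, not code from the answer): penalize the squared mismatch between the pairwise distances of a mini-batch of inputs and the pairwise distances of their codes.

```python
import numpy as np

def pairwise_dists(batch):
    """Euclidean distance matrix for a (batch_size, dim) array."""
    diff = batch[:, None, :] - batch[None, :, :]
    return np.sqrt((diff ** 2).sum(axis=-1))

def distance_preservation_loss(inputs, codes):
    """Mean squared mismatch between input-space and code-space distances.

    This term would be added (with some weight) to the usual reconstruction
    loss; it is zero exactly when the encoder preserved all pairwise
    distances within the mini-batch.
    """
    return np.mean((pairwise_dists(inputs) - pairwise_dists(codes)) ** 2)

rng = np.random.default_rng(0)
x = rng.normal(size=(16, 8))          # a mini-batch of 16 inputs

loss_same = distance_preservation_loss(x, x.copy())    # identity encoder
loss_scaled = distance_preservation_loss(x, 2.0 * x)   # distorts distances
print(loss_same, loss_scaled)
```

The same computation carries over directly to an autodiff framework, where it would be differentiated through the encoder.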
Edit: to clarify. I'm answering the question "how can one make an auto-encoder preserve distance?". The implicit answer I'm giving to "Does an auto-encoder preserve distance?" is "Not by default; though you could put in a bunch of leg-work to encourage this to be the case, ie approach 1 above".
Edit 2: @DeltaIV has a good point about dimension reduction. Note that the existence of t-SNE and so on, ie low-dimensional projections of high-dimensional space, shows both the limitations of trying to preserve distance (conflict between global distances and local distances; challenge of preserving distances in reduced dimensions), but also that it is somewhat possible, given certain caveats/compromises.
35,089 | How can I understand REINFORCE with baseline is not an actor-critic algorithm? | The difference is in how (and when) the prediction error estimate $\delta$ is calculated.
In REINFORCE with baseline:
$\qquad \delta \leftarrow G - \hat{v}(S_t,\mathbf{w})\qquad$ ; after the episode is complete
In Actor-critic:
$\qquad \delta \leftarrow R +\gamma \hat{v}(S',\mathbf{w}) - \hat{v}(S,\mathbf{w})\qquad$ ; online
Bootstrapping in RL is when the learned estimate $\hat{v}$ from a successor state $S'$ is used to construct the update for a preceding state $S$. This kind of self-reference to the learned model so far allows for updates at every step, but at the expense of initial bias towards however the model was initialised. On balance, the faster updates can often lead to more efficient learning. However the bias can lead to instability.
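To make the contrast concrete, here is a toy sketch (mine, not from the book) of both forms of $\delta$ for a short episode with a fixed value estimate. The state names, rewards, and value table are made up for illustration; note the REINFORCE version needs the whole reward tail, while the actor-critic version needs only the next reward and the bootstrapped value of the next state.

```python
# Toy episode: s0 -> s1 -> s2 -> terminal, one reward received per step.
rewards = [1.0, 0.0, 2.0]
states = ["s0", "s1", "s2", "terminal"]
v = {"s0": 0.5, "s1": 1.0, "s2": 1.5, "terminal": 0.0}  # current estimates
gamma = 0.9

def reinforce_delta(t):
    """delta = G_t - v(S_t): computable only after the episode ends."""
    G = sum(gamma ** (k - t) * rewards[k] for k in range(t, len(rewards)))
    return G - v[states[t]]

def actor_critic_delta(t):
    """delta = R + gamma * v(S') - v(S): computable online, bootstrapped."""
    return rewards[t] + gamma * v[states[t + 1]] - v[states[t]]

print([round(reinforce_delta(t), 3) for t in range(3)])
print([round(actor_critic_delta(t), 3) for t in range(3)])
```

At the final step the two coincide, since the bootstrap term $\hat{v}(\text{terminal})$ is zero and the one-step return equals the full return.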
In REINFORCE, the final return $G$ is used instead, which is the same value as you would use in Monte Carlo control. The value of $G$ is not a bootstrap estimate, it is a direct sample of the return seen when behaving with the current policy. As a result it is not biased, but you have to wait to the end of each episode before applying updates.
35,090 | How can I understand REINFORCE with baseline is not an actor-critic algorithm? | I would complement the answer given by @Neil Slater and say that you have to know that there are two ways of reducing the variance of Monte Carlo REINFORCE, and these are:
Subtracting a baseline
Approximating the expected return rather than estimating it in a MC fashion
REINFORCE with baseline uses only the first method, while the actor-critic uses the second.
The algorithm you showed here, called actor-critic in Sutton's book, is actually an Advantage Actor-Critic and uses both techniques for reducing the variance.
35,091 | How can I understand REINFORCE with baseline is not an actor-critic algorithm? | Four years late to this post. Still have something to add...
I think REINFORCE-with-baseline and actor-critic are similar and it is hard for beginners to tell them apart.
Neil's answer is great. But I guess the explanation in Sutton & Barto's book sheds great light on the doubt quoted above.
(RLBook, pdf page 353, book page 331, section 13.5 Actor-Critic Methods)
In REINFORCE with baseline, the learned state-value function estimates the value of only the first state of each state transition. This estimate sets a baseline for the subsequent return, but is made prior to the transition’s action and thus cannot be used to assess that action. In actor–critic methods, on the other hand, the state-value function is applied also to the second state of the transition. The estimated value of the second state, when discounted and added to the reward, constitutes the one-step return, $G_{t:t+1}$, which is a useful estimate of the actual return and thus is a way of assessing the action.
As we have seen in the TD learning of value functions throughout this book, the one-step return is often superior to the actual return in terms of its variance and computational congeniality, even though it introduces bias. We also know how we can flexibly modulate the extent of the bias with n-step returns and eligibility traces.
When the state-value function is used to assess actions in this way it is called a critic, and the overall policy-gradient method is termed an actor–critic method. Note that the bias in the gradient estimate is not due to bootstrapping as such; the actor would be biased even if the critic was learned by a Monte Carlo method.
To make it more intuitive, let's look at the update rules in both. More precisely, note that REINFORCE with baseline uses $G_t=\sum_{k=t+1}^T\gamma^{k-t-1}R_k$ and actor-critic uses $G_{t:t+1}=R_{t+1}+\gamma \hat{v}(S_{t+1},\mathbf{w})$:
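The variance claim in the quote can be checked with a quick simulation (my toy setup, not from the book): rewards are assumed i.i.d. $N(1,1)$ and the value estimate is taken to be exact, which isolates the variance comparison between the full Monte Carlo return $G_t$ and the one-step target $G_{t:t+1}$.

```python
import numpy as np

rng = np.random.default_rng(0)
gamma, T, n_episodes = 0.9, 10, 5000

# Exact expected discounted value of the T-1 remaining rewards (mean 1 each).
v_next = sum(gamma ** k for k in range(T - 1))

mc_targets, td_targets = [], []
for _ in range(n_episodes):
    rewards = rng.normal(loc=1.0, scale=1.0, size=T)
    G = sum(gamma ** k * rewards[k] for k in range(T))   # full MC return
    td = rewards[0] + gamma * v_next                     # one-step target
    mc_targets.append(G)
    td_targets.append(td)

# Both targets have the same mean, but the MC return accumulates the noise
# of every reward in the tail, while the one-step target only carries the
# noise of the first reward (at the cost of bias if v_next were wrong).
print(f"Var(MC return)       = {np.var(mc_targets):.3f}")
print(f"Var(one-step target) = {np.var(td_targets):.3f}")
```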
PS1:
I didn't find your quoted text in Sutton & Barto's book:
Although the REINFORCE-with-baseline method learns both a policy and a state-value function, we do not consider it to be an actor–critic method because its state-value function is used only as a baseline, not as a critic. That is, it is not used for bootstrapping (updating the value estimate for a state from the estimated values of subsequent states), but only as a baseline for the state whose estimate is being updated.
35,092 | Competing negative binomials | You are performing the equivalent of throwing a coin with a probability $p=1/6$ of heads until either $a=5$ heads or $b=20$ tails ("non-heads") have appeared. If you have thrown it $n$ times, the chance of this event not happening is given by the Binomial distribution as
$$S(n;a,b,p) = \sum_{k=\max(0,n-b+1)}^{\min(n,a-1)} \binom{n}{k} p^k(1-p)^{n-k}.$$
(The sum equals zero whenever its lower limit exceeds its upper limit.)
Therefore the chance that $n\gt 0$ is the throw when either $a$ heads or $b$ tails are first observed is
$$f(n;a,b,p) = S(n-1;a,b,p) - S(n;a,b,p).$$
Obviously this must equal $0$ for $n \lt \min(a,b)$ or $n \ge a+b$. We therefore may easily report the entire distribution: here is the plot of its probability function $f$ between $0$ and $a+b=25,$ as computed by these formulas:
This simple solution becomes even simpler (and yields additional information about whether the tosses terminated with $a$ heads or $b$ tails) when we recognize the question can be framed as a random walk in the $(x,y)$ plane.
Start at the origin $(0,0)$. Whenever the coin comes up heads, move one unit up; otherwise, move one unit to the right. Stop the first time one of the absorbing barriers $y=a$ or $x=b$ is hit.
The geometry of this situation is shown in the second figure. It plots the points that can be reached on this walk, showing the absorbing barriers as black lines. The possible terminal points along those barriers are marked with black dots.
The number of times each terminal point was reached in 1000 iterations of this walk are depicted by the colors and sizes of the larger points. The path shown in red corresponds to a sequence in which one tail was observed, then one head, then 10 tails, a head, a tail, two heads, four tails, and a head. It comprised 21 coin tosses altogether.
Each path that reaches any particular point $(x,y)$ on the absorbing barrier consists of $x$ tails and $y$ heads and therefore has a chance of $p^y(1-p)^x$.
Clearly, the last outcome in any path that terminates at $(x,a)$ was a heads. The number of such paths therefore is the number of distinct paths connecting $(0,0)$ to $(x,a-1)$, of which there are $\binom{x+a-1}{a-1}$. Consequently the chance of terminating at $(x,a)$ is
$$\Pr(x,a) = \binom{x+a-1}{a-1} p^{a}(1-p)^x.$$
Similarly the chance of terminating at $(b,y)$ is
$$\Pr(b,y) = \binom{y+b-1}{b-1} p^y(1-p)^b.$$
The chance of terminating after $n$ steps, with $\min(a,b)\le n \le a+b-1$, therefore is the sum of two such expressions (one of which may be zero):
$$f(n;a,b,p) = \binom{n-1}{a-1} p^a(1-p)^{n-a} + \binom{n-1}{b-1} p^{n-b}(1-p)^b\text{ if }\min(a,b)\le n \lt a+b.$$
This counts the number of $n$-step paths that reach the absorbing barrier at the top or to the right, respectively, weighting each one by its probability.
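As a numerical check (my code, not part of the original answer), the path-counting formula agrees term-by-term with the survival-difference formula $f(n)=S(n-1)-S(n)$ for $p=1/6$, $a=5$, $b=20$, and summing the right-barrier probabilities $\Pr(b,y)$ reproduces the roughly $63\%$ figure discussed below.

```python
from math import comb

p, a, b = 1 / 6, 5, 20

def S(n):
    """P(neither a heads nor b tails have appeared in the first n throws)."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(max(0, n - b + 1), min(n, a - 1) + 1))

def f_direct(n):
    """Path-counting formula: end with the a-th head or the b-th tail."""
    total = 0.0
    if a <= n <= a + b - 1:
        total += comb(n - 1, a - 1) * p**a * (1 - p)**(n - a)
    if b <= n <= a + b - 1:
        total += comb(n - 1, b - 1) * p**(n - b) * (1 - p)**b
    return total

# The two derivations give the same distribution.
for n in range(1, a + b):
    assert abs((S(n - 1) - S(n)) - f_direct(n)) < 1e-12

# Probability of hitting the right barrier (b tails before a heads).
p_right = sum(comb(y + b - 1, b - 1) * p**y * (1 - p)**b for y in range(a))
print(f"P(right barrier first) = {p_right:.4f}")   # about 0.63
```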
The sudden leap in probability at $n=20$ in the first figure is now explained: for the first time (compared to smaller values of $n$), it becomes possible to end the tosses at the righthand barrier. This happens in a great number of cases, because it's (slightly) more likely that the right barrier will be reached before the top barrier is. (The chance of reaching the right barrier first is readily found by summing the probabilities associated with its five points, which is almost $63\%$.) We know that ending the walk at the right barrier is more probable because on average a path will rise by one unit $p=1/6$ of the time but will move to the right one unit $1-p=5/6$ of the time, for an average slope of $1/6:5/6 = 1/5$. A path with that slope reaches the absorbing region at the location $(20, 20/5)=(20,4)$: on the righthand barrier.
35,093 | Competing negative binomials | Having slept on it, I think the strategy may be this:
Convert each of the negative binomial probability distributions to conditional probabilities: conditional on not having gotten 5 ones in n-1 rolls, what is the probability of getting the 5th one on the nth roll?
For n = 1 up to some sufficiently large value, sum the two conditional probabilities, and multiply the complement of that sum by S(n-1), the cumulative "survival" through the (n-1)th roll, to obtain S(n).
Take successive differences S(n-1) - S(n) to recover the probability distribution.
The setting is one of comparative safety monitoring of marketed health products. You have two compared groups, possibly of unequal size, followed over time. Each adverse event is a binomial trial, as the event can derive from either drug A or drug B.
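As a rough numerical check of this strategy (my addition, not part of the original answer), one can also simulate the competing process directly; the thresholds (5 ones, 20 non-ones) and p = 1/6 are taken from the running example:

```python
import random

def simulate_stop(a=5, b=20, p=1/6, rng=random):
    """Toss a p-coin until a heads (ones) or b tails; return (tosses, a_won)."""
    heads = tails = n = 0
    while heads < a and tails < b:
        n += 1
        if rng.random() < p:
            heads += 1
        else:
            tails += 1
    return n, heads == a

rng = random.Random(0)
runs = [simulate_stop(rng=rng) for _ in range(20000)]
# fraction of runs in which the b-threshold (20 non-ones) was hit first
share_b_first = sum(1 for _, a_won in runs if not a_won) / len(runs)
```

The stopping time always lies between min(a, b) = 5 and a + b - 1 = 24, and the Monte Carlo estimate of the b-first probability should land near the exact value of about 63%.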
35,094 | Machine learning on non-fixed-length sequential data? | It seems you are asking two questions here:
How to deal with the situation where different samples have different numbers of features, i.e. when some features are either not applicable to some samples or are not available
How to perform supervised classification on time-series data
With regards to question 1, it depends. Each sample does need to have the same number of features. Some models, e.g., decision-tree-based ones, can explicitly deal with missing/NA data. Others, like logistic regression, require numeric features and cannot deal with missing values directly. In this case, it may be worth introducing additional binary features (representing whether feature X is present/applicable), and choosing some appropriate value for feature X in case it is missing / not applicable. A good choice would depend on the specific problem.
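The presence-flag idea can be sketched in a few lines of plain Python (my illustration; the feature names and fill value are arbitrary placeholders):

```python
def vectorize(sample, feature_names, fill_value=0.0):
    """Turn a possibly-incomplete feature dict into a fixed-length
    representation: one value plus one binary 'is present' flag per feature."""
    out = {}
    for name in feature_names:
        present = sample.get(name) is not None
        out[name] = sample[name] if present else fill_value
        out[name + "_present"] = 1 if present else 0
    return out
```

Every sample now maps to the same fixed set of columns, regardless of which features it originally carried.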
Question 2: you have a choice of manually engineering features, or trying a model that can attempt to deal with the temporal structure of your data automatically. Most models assume that each sample is independent of the others; ideally, you would apply some feature engineering to make your time series stationary and use your domain knowledge to decide what historical data is important for each sample and how it should be represented. Z-scores, moving averages, variances etc. could all be useful here. If you have a lot of data, you may attempt to use RNNs, but in my experience it is only worth it if you have a lot of data and you otherwise have no intuition about which features may be useful.
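One of the simplest such engineered features, a rolling z-score, can be computed with the standard library alone (my sketch; the window length is an arbitrary choice):

```python
from collections import deque
import statistics

def rolling_zscore(series, window=5):
    """Z-score of each point against the previous `window` observations."""
    history = deque(maxlen=window)
    out = []
    for x in series:
        if len(history) == window:
            mu = statistics.fmean(history)
            sd = statistics.pstdev(history)
            out.append((x - mu) / sd if sd > 0 else 0.0)
        else:
            out.append(None)  # not enough history yet
        history.append(x)
    return out
```

Because each z-score only uses past observations, this transformation does not leak future information into a training sample.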
Regardless of which model you choose to use, setting up appropriate validation and testing frameworks is absolutely crucial. With time series you need to be extra careful. E.g., you need to decide whether using data from the future to train your model is appropriate, whether you need to discard some data immediately around your training period, etc. Do not just blindly sample data at random into validation/test sets; this will likely give you wildly biased estimates that will not be useful.
I would also recommend researching each question independently; both have been addressed on this stackexchange before. Good luck!
35,095 | Machine learning on non-fixed-length sequential data? | It looks like an RNN would be a good model for your problem.
They can deal with time-series sequences of different lengths. Basically, they implement a memory mechanism in their internal state that allows them to "remember" what has happened, so the past is taken into account when making decisions about the future.
However, they are quite generic models and can be used in many domains. There are different types of RNNs; probably the most common ones are the LSTM and the GRU.
In my opinion, Colah's blog has a great explanation of LSTMs.
If, instead, you are looking for something more practical, I would suggest the PyTorch tutorial.
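To make the "internal state" idea concrete, here is a toy, pure-Python recurrent step (my own minimal sketch, not the PyTorch API): the hidden state is updated once per element, so the same code handles sequences of any length.

```python
import math

def rnn_step(h, x, w_h=0.5, w_x=1.0, b=0.0):
    """One step of a toy scalar recurrent cell (weights are arbitrary)."""
    return math.tanh(w_h * h + w_x * x + b)

def rnn_run(xs):
    """Fold a sequence of ANY length into a single hidden state."""
    h = 0.0
    for x in xs:
        h = rnn_step(h, x)  # the state 'remembers' what came before
    return h
```

Note that the final state depends on the order of the inputs, which is exactly what lets recurrent models exploit temporal structure.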
35,096 | Bootstrap sample with size greater than the original sample | The objective of bootstrapping is (usually) to get some idea of the distribution of the parameter estimate(s). Since the parameter estimates were formed on the basis of a sample of size $N$, their distribution is conditional upon that sample size. Resampling to larger or smaller sample sizes will, consequently, give a more distorted view of the distribution of the parameter estimates than resampling with a sample size of $N$.
In this case, however, you are not actually performing the Efron bootstrap. You are simply generating simulated values of the sample path based upon the 500 estimated errors. Consequently, the issue with whether or not you can generate more than 500 such sample paths is moot; you can, as Johan points out, generate as many as you want.
Since you are basing all your results on the one set of initial parameter estimates, the sample paths are conditional upon that set being correct. The variability in the end result does not take into account parameter uncertainty, and it is this additional variability that the Efron bootstrap is designed to help with. A process that incorporates the bootstrap might be:
Select a sample (with replacement) of 500 values from the initial set of standardized residuals (this 500 is the "500" that gave you so much trouble in your thinking about the problem and that Efron refers to in the book),
Calculate a simulated version of the original series using those standardized residuals and your initial parameter estimates,
Re-estimate the parameters using the simulated version of the original series,
Use the standardized residuals from the re-estimated parameters and the original data to generate some (smallish) number $M$ of future sample paths,
If you've generated enough overall sample paths, exit, else go to 1.
Steps 1 through 3 are where the Efron bootstrap comes into play. Step 4 is the simulation as it is currently performed. Note that at each iteration you are generating new standardized residuals for use in the simulator; this will lessen the dependence of the results on the initial set of parameter estimates / standardized residuals and take into account, to some extent, the inaccuracy in the parameter estimates themselves.
If you generate $K$ bootstrap estimates in steps 1 and 2, you will have generated $KM$ total sample paths at the end of the exercise. How you should divide those between $K$ and $M$ depends to some extent on the various computational burdens involved but also upon how the contributions to randomness are split between parameter estimation error and sample path variability. As a general rule, the more accurate your parameter estimates are, the smaller $K$ can be; conversely, the less the sample paths vary for a given value of the parameter estimates, the smaller $M$ can be. | Bootstrap sample with size greater than the original sample | The objective of bootstrapping is (usually) to get some idea of the distribution of the parameter estimate(s). Since the parameter estimates were formed on the basis of a sample of size $N$, their di | Bootstrap sample with size greater than the original sample
The objective of bootstrapping is (usually) to get some idea of the distribution of the parameter estimate(s). Since the parameter estimates were formed on the basis of a sample of size $N$, their distribution is conditional upon that sample size. Resampling to larger or smaller sample sizes will, consequently. give a more distorted view of the distribution of the parameter estimates than resampling with a sample size of $N$.
In this case, however, you are not actually performing the Efron bootstrap. You are simply generating simulated values of the sample path based upon the 500 estimated errors. Consequently, the issue with whether or not you can generate more than 500 such sample paths is moot; you can, as Johan points out, generate as many as you want.
Since you are basing all your results on the one set of initial parameter estimates, the sample paths are conditional upon that set being correct. The variability in the end result does not take into account parameter uncertainty, and it is this additional variability that the Efron bootstrap is designed to help with. A process that incorporates the bootstrap might be:
Select a sample (with replacement) of 500 values from the initial set of standardized residuals (this 500 is the "500" that gave you so much trouble in your thinking about the problem and that Efron refers to in the book,)
Calculate a simulated version of the original series using those standardized residuals and your initial parameter estimates,
Re-estimate the parameters using the simulated version of the original series,
Use the standardized residuals from the re-estimated parameters and the original data to generate some (smallish) number $M$ of future sample paths,
If you've generated enough overall sample paths, exit, else go to 1.
Steps 1 through 3 are where the Efron bootstrap comes into play. Step 4 is the simulation as it is currently performed. Note that at each iteration you are generating new standardized residuals for use in the simulator; this will lessen the dependence of the results on the initial set of parameter estimates / standardized residuals and take into account, to some extent, the inaccuracy in the parameter estimates themselves.
If you generate $K$ bootstrap estimates in steps 1 and 2, you will have generated $KM$ total sample paths at the end of the exercise. How you should divide those between $K$ and $M$ depends to some extent on the various computational burdens involved but also upon how the contributions to randomness are split between parameter estimation error and sample path variability. As a general rule, the more accurate your parameter estimates are, the smaller $K$ can be; conversely, the less the sample paths vary for a given value of the parameter estimates, the smaller $M$ can be. | Bootstrap sample with size greater than the original sample
The objective of bootstrapping is (usually) to get some idea of the distribution of the parameter estimate(s). Since the parameter estimates were formed on the basis of a sample of size $N$, their di |
35,097 | Bootstrap sample with size greater than the original sample | It is perfectly fine to sample more than 500 draws from the empirical distribution.
The 500 standardized residuals make up the empirical distribution from which you sample the realizations of $z_{t+h}$ needed for multi-step forecasts. In the one-step-ahead case no draws are needed, since the conditional volatility at time $t+1$ is known based on the information set at time $t$.
As you correctly do, one samples with replacement from the empirical distribution. Hence, you can obtain as many draws as you want. You just need to think of the empirical distribution in the same way as if you were drawing from an assumed iid N(0,1).
Simulation-based forecasts are based on the mean of the simulated volatility paths. When increasing the number of simulations, the mean will be closer to the "true" forecast. One interesting exercise is to assume $z_t$ iid N(0,1) and compare it with the analytical GARCH forecast - with a large number of bootstrap samples, the two forecasts will be essentially identical.
An alternative approach is to fit a distribution, either parametric or non-parametric, to the obtained standardized residuals and draw from that.
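Drawing more than 500 values from a 500-point empirical distribution is a one-liner with sampling with replacement (my illustration; the Gaussian residuals are a stand-in for the estimated $z_t$):

```python
import random

rng = random.Random(7)
residuals = [rng.gauss(0.0, 1.0) for _ in range(500)]  # stand-in for the 500 z_t

# Sampling WITH replacement, so any number of draws is fine, even k > 500.
draws = rng.choices(residuals, k=5000)
```

Every draw is one of the original 500 values; only the multiplicities differ across draws.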
35,098 | Encoding of categorical variables (dummy vs. effects coding) in mixed models | As said by @amoeba in the comment, the question is not so much a mixed model question, but more a general question on how to parameterize a regression model with interactions. The full quote from our chapter also provides an answer to your second question (i.e., the why):
A common contrast scheme, which is the default in R, is called treatment contrasts (i.e., contr.treatment; also called dummy coding). With treatment contrasts the first factor level serves as the baseline whereas all other levels are mapped onto exactly one of the contrast variables with a value of 1. As a consequence, the intercept corresponds to the mean of the baseline group and not the grand mean. When fitting models without interactions, this type of contrast has the advantage that the estimates (i.e., the parameters corresponding to the contrast variables) indicate whether there is a difference between the corresponding factor level and the baseline.
However, when including interactions, treatment contrasts lead to results that are often difficult to interpret. Whereas the highest-order interaction is unaffected, the lower-order effects (such as main effects) are estimated at the level of the baseline, ultimately yielding what are known as simple effects rather than the usually expected lower-order effects. Importantly, this applies to both the resulting parameter estimates of the lower order effects as well as their Type III tests. In other words, a mixed model (or any other regression type model) that includes interactions with factors using treatment contrasts produces parameter estimates as well as Type III tests that often do not correspond to what one wants (e.g., main effects are not what is commonly understood as a main effect). Therefore we generally recommend to avoid treatment contrasts for models that include interactions.
Orthogonal sum-to-zero contrasts are better because they avoid potentially difficult-to-interpret lower-order effects. That is, for those contrasts all lower-order effects are evaluated at the grand mean. For a quick explanation of the dummy vs. effect coding difference, please see: http://www.lrdc.pitt.edu/maplelab/slides/Simple_Main_Effects_Fraundorf.pdf
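For readers without R at hand, the two coding schemes can be sketched in a few lines of plain Python (my illustration; R's contr.treatment and contr.sum produce the analogous columns of the design matrix):

```python
def treatment_code(level, levels):
    """Dummy coding (like R's contr.treatment): baseline -> all zeros."""
    return [1 if level == other else 0 for other in levels[1:]]

def sum_code(level, levels):
    """Sum-to-zero coding (like R's contr.sum): last level -> all -1s."""
    if level == levels[-1]:
        return [-1] * (len(levels) - 1)
    return [1 if level == other else 0 for other in levels[:-1]]
```

With sum coding, each contrast column averages to zero across the levels, which is why lower-order effects are evaluated at the grand mean rather than at the baseline.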
This means for your case, almost all your interpretations are correct with one exception.
ConditionB - what is the difference in Intercept for Condition B from Condition A, when X is zero.
Hence, if zero is somewhat meaningless for your variable (e.g., it is age and you only observe adult participants), your estimate of Condition (which is now a simple effect of condition at X = 0) becomes meaningless as well.
In general, having interactions with continuous covariates is not trivial and there are at least two books and several papers which extensively discuss this issue. A common solution is centering the covariate on the mean. Whether or not this makes sense depends on your covariate. What I sometimes do when having a variable with a restricted range (e.g., it goes from 0 to 100) is to center on the midpoint of the scale (see e.g., here).
More information on centering can be found in the following references. I recommend you read the first one at least:
Dalal, D. K., & Zickar, M. J. (2012). Some Common Myths About Centering Predictor Variables in Moderated Multiple Regression and Polynomial Regression. Organizational Research Methods, 15(3), 339-362. doi:10.1177/1094428111430540 [free pdf]
Cohen, J., Cohen, P., West, S. G., & Aiken, L. S. (2002). Applied Multiple Regression/Correlation Analysis for the Behavioral Sciences. New York: Routledge Academic. [great book]
Aiken, L. S., & West, S. G. (1991). Multiple regression: testing and interpreting interactions. Newbury Park, Calif.: Sage Publications.
There is also some mixed-model specific discussion on centering, but to me this appears to be mainly relevant for hierarchical structures (i.e., at least two-levels of nesting), e.g.,
Wang, L., & Maxwell, S. E. (2015). On disaggregating between-person and within-person effects with longitudinal data using multilevel models. Psychological Methods, 20(1), 63–83. https://doi.org/10.1037/met0000030
Potentially also relevant:
Iacobucci, D., Schneider, M. J., Popovich, D. L., & Bakamitsos, G. A. (2016). Mean centering helps alleviate “micro” but not “macro” multicollinearity. Behavior Research Methods, 48(4), 1308–1317. https://doi.org/10.3758/s13428-015-0624-x
35,099 | Expectation when cumulative distribution function is given | The discrete case: assume that $X \ge 0$ takes non-negative integer values. Then we can write the expectation as
$$ \DeclareMathOperator{\E}{\mathbb{E}}
\DeclareMathOperator{\P}{\mathbb{P}}
\E X = \sum_{k=0}^\infty k \P(X=k)
$$
Now, we will first write this as a double sum, and then change the order of summation. Observe that $k = \sum_{j=0}^{k-1} 1$ (for $k=0$ the upper limit is below the lower limit; we take that to be the empty sum, which is zero). This gives
$$
\E X = \sum_{k=0}^\infty \sum_{j=0}^{k-1} 1 \cdot \P(X=k)
$$
Now, in this double sum we sum first on $j$, which clearly goes to $\infty$. Observe that in the inner summation the indices satisfy the inequality
$$
0 \le j \le k-1
$$
Solving that for $k$ gives $ k \ge j+1$, which then gives the limits of summation in the new inner sum:
$$
\E X = \sum_{j=0}^\infty \sum_{k=j+1}^\infty \P(X=k) = \sum_{j=0}^\infty \P(X > j)
$$
which is the result. The continuous case is similar.
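A quick numerical sanity check of the identity $E[X] = \sum_j P(X > j)$ for a finitely supported pmf (my addition; the helper names are arbitrary):

```python
def mean_from_pmf(pmf):
    """E[X] = sum_k k * P(X = k), pmf given as a list over 0..n-1."""
    return sum(k * p for k, p in enumerate(pmf))

def mean_from_survival(pmf):
    """E[X] = sum_j P(X > j), the identity derived above."""
    n = len(pmf)
    return sum(sum(pmf[k] for k in range(j + 1, n)) for j in range(n))
```

Both functions compute the same expectation; the second one only ever touches the tail probabilities $P(X > j)$.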
$$ \DeclareMathOperator{\E}{\mathbb{E}}
\DeclareMathOperator{\P}{\mathbb{P}}
\E X | Expectation when cumulative distribution function is given
The discrete case, assume that $X \ge 0$ takes non-negative integer values. Then we can write the expectation as
$$ \DeclareMathOperator{\E}{\mathbb{E}}
\DeclareMathOperator{\P}{\mathbb{P}}
\E X = \sum_{k=0}^\infty k \P(X=k)
$$
Now, we will first write this as a double sum, and then change the order of summation. Observe that $k = \sum_{j=0}^{k-1} 1$ (the case $k=0$ gives a lower upper than lower limit, we take that as the empty sum, which is zero). This gives
$$
\E X = \sum_{k=0}^\infty \sum_{j=0}^{k-1} 1 \cdot \P(X=k)
$$
Now, in this double sum we sum first on $j$, which clearly goes to $\infty$. Observe that in the inner summation the indices satisfy the inequality
$$
0 \le j \le k-1
$$
Solving that for $k$ gives $ k \ge j+1$, which then gives the limits of summation in the new inner sum:
$$
\E X = \sum_{j=0}^\infty \sum_{k=j+1}^\infty \P(X=k) = \sum_{j=0}^\infty \P(X > j)
$$
which is the result. The continuous case is similar. | Expectation when cumulative distribution function is given
The discrete case, assume that $X \ge 0$ takes non-negative integer values. Then we can write the expectation as
$$ \DeclareMathOperator{\E}{\mathbb{E}}
\DeclareMathOperator{\P}{\mathbb{P}}
\E X |
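As a quick numerical sanity check of the tail-sum identity, the sketch below compares $\sum_k k\,\P(X=k)$ against $\sum_j \P(X>j)$ for a truncated Poisson pmf; the choice of Poisson (and the truncation point) is purely illustrative, since the identity holds for any non-negative integer-valued $X$.

```python
# Numerical check of E[X] = sum_{j>=0} P(X > j) for a non-negative
# integer-valued X. Here X ~ Poisson(lam), truncated at K; the tail
# mass beyond K is negligible for these values.
import math

lam = 3.0
K = 60  # truncation point; Poisson(3) mass beyond 60 is astronomically small

pmf = [math.exp(-lam) * lam**k / math.factorial(k) for k in range(K + 1)]

# Direct expectation: sum over k of k * P(X = k)
direct = sum(k * p for k, p in enumerate(pmf))

# Tail-sum form: sum over j of P(X > j) = sum over j of sum_{k > j} P(X = k)
tail = sum(sum(pmf[j + 1:]) for j in range(K + 1))

assert abs(direct - lam) < 1e-9   # E[X] = lambda for a Poisson
assert abs(direct - tail) < 1e-9  # the two expressions agree
```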
Why do some formulas have the coefficient in the front in logistic regression likelihood, and some don't?

The second is a special case of the first. Your first reference discusses the case where each $y_i$ is distributed as a Binomial distribution with sample size $n_i$, while the second reference assumes each $y_i$ is a Bernoulli random variable. That is the difference: when each $n_i = 1$, the binomial coefficient $\frac{n_i!}{y_i!(n_i-y_i)!} = 1$.

Some quotes supporting this: from section 2.1.2 in the first reference:

Since the probability of success for any one of the $n_i$ trials is
$\pi_i$...

And from the first section, 12.1, in the second reference:

Let's pick one of the classes and call it "$1$" and the other
"$0$"...
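To see concretely that the coefficient term vanishes when every $n_i = 1$, the sketch below evaluates both log-likelihoods on some made-up Bernoulli data with made-up fitted probabilities (both the outcomes and the probabilities are purely illustrative):

```python
# With n_i = 1, the binomial coefficient C(n_i, y_i) equals 1, so the
# binomial log-likelihood (coefficient included) matches the Bernoulli
# log-likelihood (no coefficient term).
from math import comb, log

y = [1, 0, 1, 1, 0]             # Bernoulli outcomes, one trial each (n_i = 1)
pi = [0.8, 0.3, 0.6, 0.9, 0.2]  # illustrative fitted success probabilities

# Binomial log-likelihood with n_i = 1, including log C(1, y_i)
ll_binom = sum(log(comb(1, yi)) + yi * log(p) + (1 - yi) * log(1 - p)
               for yi, p in zip(y, pi))

# Bernoulli log-likelihood, with no coefficient term
ll_bern = sum(yi * log(p) + (1 - yi) * log(1 - p) for yi, p in zip(y, pi))

assert comb(1, 0) == comb(1, 1) == 1  # log of the coefficient contributes 0
assert abs(ll_binom - ll_bern) < 1e-12
```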