35,201 | Does ARIMA require normally distributed errors or normally distributed input data?

To answer your first question and to quote from [1]:
Mathematics tell us that linear predictor can be optimal only when the
process is Gaussian. When the process is non-Gaussian, a better
predictor may be given by a non-linear dynamic model (Masani and
Wiener, 1959).
Why statisticians, both young and old, keep themselves silent about it, I do not know.
References:
[1] Ozaki, T. & Iino, M. (2001). An innovation approach to non-Gaussian time series analysis. Journal of Applied Probability, 38, 78-92.
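The quoted point can be seen in a short R sketch (the AR order, the t-distributed innovations, and the sample size are all made up for illustration): a linear ARIMA fit still recovers the autoregressive structure of a non-Gaussian process, but its residuals inherit the non-normality, which is exactly the situation where a non-linear predictor may improve on the linear one.

```r
set.seed(1)
# AR(1) with heavy-tailed (t with 3 df) innovations -- a non-Gaussian process
n <- 500
e <- rt(n, df = 3)
x <- stats::filter(e, filter = 0.6, method = "recursive")

fit <- arima(x, order = c(1, 0, 0))
fit$coef                        # the AR coefficient is still estimated sensibly

# the residuals inherit the heavy tails; a normality test flags them
shapiro.test(residuals(fit))
```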
35,202 | Significance of dummy variables in regression

Categorical variables can be represented several different ways in a regression model. The most common, by far, is reference cell coding. From your description (and my prior), I suspect that is what was used in your case. The standard statistical output will give you two tests. Let's say that A is the reference level, you will have a test of B vs. A, and a test of C vs. A (n.b., C can significantly differ from B, but not A, and not show up in these tests). These tests are usually not what you really want to know. You should test a multi-category variable by dropping both dummy variables and performing a nested model test. Unless you had an a-priori plan to test if a pre-specified level is necessary and it is not 'significant', you should retain the entire variable (i.e., all levels). If you did have such an a-priori hypothesis (i.e., that was the point of your study), you can drop only the level in question and perform a nested model test.
It may help you to read about some of these topics. Here are some references for further study:
Coding strategies for categorical variables:
UCLA's stats help website
I discuss reference cell coding here: Regression based for example on days of week
Problems with modifying your model based on what you find, when you didn't have a pre-specified hypothesis:
While it's not framed exactly like your situation, you may be able to get the idea from my answer here: Algorithms for automatic model selection
Issues with multiple comparisons:
You might skim some of the CV threads categorized under the multiple-comparisons tag
the Wikipedia page for multiple comparisons
Nested model tests:
Although discussed in terms of testing for moderation, my answer here should be clear enough to get the idea: Testing for moderation with continuous vs. categorical moderators
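The nested model test described above can be carried out in R with anova(); this sketch uses simulated data with a three-level factor purely for illustration.

```r
set.seed(42)
# toy data: a 3-level factor (A is the reference level) and a continuous covariate
grp <- factor(sample(c("A", "B", "C"), 90, replace = TRUE))
x   <- rnorm(90)
y   <- 2 + 0.5 * x + ifelse(grp == "C", 1, 0) + rnorm(90)

full    <- lm(y ~ x + grp)   # both dummy variables included
reduced <- lm(y ~ x)         # entire categorical variable dropped

# nested model F-test for the variable as a whole (both dummies at once)
anova(reduced, full)
```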
35,203 | Significance of dummy variables in regression

There is no need to include indicator variables for each of the categories. Let's say category A is coming out significant. Your results are suggesting that you consider collapsing the categories into "category A" and "all other categories".
Of course, you should perform an F-test of the nested model vs. the full model to check whether removing the indicator variables for the other categories makes sense.
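The suggested collapse-then-test workflow can be sketched in R (all data here are simulated for illustration): fit the full model with every category, fit the collapsed model with "A" vs. everything else, and compare them with an F-test.

```r
set.seed(7)
grp <- factor(sample(c("A", "B", "C"), 120, replace = TRUE))
y   <- 2 + ifelse(grp == "A", 1, 0) + rnorm(120)

full      <- lm(y ~ grp)             # all categories kept separate
collapsed <- lm(y ~ I(grp == "A"))   # "category A" vs. "all other categories"

# F-test: does collapsing B and C together lose anything?
anova(collapsed, full)
```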
35,204 | Categorical response variable prediction

You could use ANY classifier, including linear discriminants, multinomial logit (as Bill pointed out), support vector machines, neural nets, CART, random forest, or C5 trees; there is a world of different models that can help you predict $v.a$ using $v.b$ and $v.c$. Here is an example using the R implementation of random forest:
# packages
library(randomForest)

# variables
v.a <- c('cat','dog','dog','goat','cat','goat','dog','dog')
v.b <- c(1,2,1,2,1,2,1,2)
v.c <- c('blue','red','blue','red','red','blue','yellow','yellow')

# model fit
# note that you must turn the categorical variables into factors, and pass the
# predictors as a data frame (cbind() would silently coerce the factor to
# numeric), or R won't use them properly
model <- randomForest(y = as.factor(v.a),
                      x = data.frame(v.b, v.c = as.factor(v.c)),
                      ntree = 10)

# plot of out-of-bag error rates by class
plot(model)

# model confusion matrix
model$confusion
Clearly, these variables don't show a strong relation.
35,205 | Categorical response variable prediction

This is more of a partial, practical answer, but it works for me to do some exercises before getting deeply into the theory.
This ats.ucla.edu link is a reference that might help you begin to understand multinomial logistic regression (as pointed out by Bill) in a more practical way.
It presents reproducible code to understand the multinom function from the nnet package in R and also gives a brief guide to interpreting the output.
Consider this code:
va = c('cat','dog','dog','goat','cat','goat','dog','dog')
# cat will be the outcome baseline
vb = c(1,2,1,2,1,2,1,2)
vc = c('blue','red','blue','red','red','blue','yellow','yellow')
# blue will be the vc predictor baseline
set.seed(12)
vd = round(rnorm(8),2)
# caution: cbind() coerces everything to character; in older R (< 4.0),
# data.frame() then stores factors, so as.numeric() in the formula below works
# on factor level codes rather than the original values -- with real data,
# prefer data = data.frame(va, vb, vc, vd)
data = data.frame(cbind(va,vb,vc,vd))
library(nnet)
fit <- multinom(va ~ as.numeric(vb) + vc + as.numeric(vd), data=data)
# weights: 18 (10 variable)
initial value 8.788898
iter 10 value 0.213098
iter 20 value 0.000278
final value 0.000070
converged
fit
Call:
multinom(formula = va ~ as.numeric(vb) + vc + as.numeric(vd),
data = data)
Coefficients:
(Intercept) as.numeric(vb) vcred vcyellow as.numeric(vd)
dog -1.044866 120.3495 -6.705314 77.41661 -21.97069
goat 47.493155 126.4840 49.856414 -41.46955 -47.72585
Residual Deviance: 0.0001656705
AIC: 20.00017
This is how you can interpret the log-linear fitted multinomial logistic model:
\begin{align}
\ln\left(\frac{P(va={\rm dog})}{P(va={\rm cat})}\right) &= b_{10} + b_{11}vb + b_{12}(vc={\rm red}) + b_{13}(vc={\rm yellow}) + b_{14}vd \\
&\ \\
\ln\left(\frac{P(va={\rm goat})}{P(va={\rm cat})}\right) &= b_{20} + b_{21}vb + b_{22}(vc={\rm red}) + b_{23}(vc={\rm yellow}) + b_{24}vd
\end{align}
Here is an excerpt about how the model parameters can be interpreted:
A one-unit increase in the variable vd is associated with the decrease in the log odds of being "dog" vs. "cat" in the amount of 21.97069 ($b_{14}$).
The same logic applies to the second line, but considering "goat" vs. "cat", with $b_{24} = -47.72585$.
The log odds of being "dog" vs. "cat" will decrease by 6.705314 when moving from vc="blue" to vc="red" ($b_{12}$).
.....
There is much more in the article, but I thought this part to be the core.
Reference:
R Data Analysis Examples: Multinomial Logistic Regression. UCLA: Statistical Consulting Group.
from http://www.ats.ucla.edu/stat/r/dae/mlogit.htm (accessed November 05, 2013).
35,206 | Integral of random variable

Doesn't make much sense. Remember that $C$ is a (measurable) map from $\Omega$ (the underlying sample space) to $\mathbb{R}$. Hence, strictly speaking,
$$
\int_a^b C \,dx = \int_a^b C(\omega) \,dx = C(\omega) \int_a^b \,dx = C(\omega) \cdot (b-a) \, .
$$
(P.S. When you have a stochastic process $Z:\Omega\times\mathbb{R}\to\mathbb{R}$, then under certain conditions $I:\Omega\to\mathbb{R}$ defined by
$$
I(\omega) = \int_a^b Z(\omega,x)\,dx
$$
is a random variable. For details, take a look at the classic probability books by Loève, Neveu, and Doob.)
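To make the pathwise view concrete, here is a small simulation (the choice $C \sim N(0,1)$ and the interval $[2,5]$ are arbitrary): each realisation of $C$ is just a number, so the integral merely scales it by $(b-a)$, and the collection of integrals is itself a random variable.

```r
set.seed(1)
a <- 2; b <- 5
C <- rnorm(10000)    # one realisation C(omega) per entry
I <- C * (b - a)     # the pathwise integral of the constant C over [a, b]

# I is itself a random variable: mean (b-a)*E[C] = 0, sd (b-a)*sd(C) = 3
c(mean(I), sd(I))
```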
35,207 | What's the difference between Hedges' g and Cohen's d? [duplicate]

You are right: the difference between them is very small and will disappear with large $N$. In fact, most people (at least in my experience) are not aware of any of this; "Cohen's $d$" is often used generically, and many people who have not heard of Hedges' $g$ nonetheless use the latter formula and call it by the former name. The difference is that Cohen used the maximum likelihood estimator for the variance, which is biased with small $N$, whereas Hedges used Bessel's correction to estimate the variance. (For more on this topic, it may help you to read this CV thread: What is the difference between N and N-1 in calculating population variance?) The corresponding formulas are often known as the population formula for the variance and the sample formula. Recall that these are:
\begin{align}
\text{Var}(X)_\text{population} &= \frac{\sum (x_i-\bar x)^2}{N} \\
~ \\
~ \\
\text{Var}(X)_\text{sample} &= \frac{\sum (x_i-\bar x)^2}{N-1}
\end{align}
As $N$ increases indefinitely, these two estimates will converge to the same value. However, with small samples, the population formula will underestimate the variance because it does not take into account the fact that the mean, $\bar x$, was estimated from the same dataset. When these estimates are subsequently used to estimate the standardized mean difference, that implies that the former will overestimate the effect size.
Thus, with small samples, Hedges' $g$ provides a superior estimate of the standardized mean difference, but the superior performance fades as the sample size increases.
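Following the answer's distinction (Cohen: maximum-likelihood pooled variance with denominator $n_1+n_2$; Hedges: Bessel-corrected with denominator $n_1+n_2-2$), a small R sketch shows the two estimators diverging with tiny samples and converging as the samples grow. All data here are simulated purely for illustration.

```r
set.seed(1)
std_mean_diff <- function(x, y, denom = c("sample", "population")) {
  denom <- match.arg(denom)
  n1 <- length(x); n2 <- length(y)
  ss <- sum((x - mean(x))^2) + sum((y - mean(y))^2)
  s2 <- if (denom == "population") ss / (n1 + n2)       # Cohen's d (MLE)
        else                       ss / (n1 + n2 - 2)   # Hedges' g (Bessel)
  (mean(x) - mean(y)) / sqrt(s2)
}

small_x <- rnorm(5, 0.5);   small_y <- rnorm(5)
big_x   <- rnorm(500, 0.5); big_y   <- rnorm(500)

c(d = std_mean_diff(small_x, small_y, "population"),
  g = std_mean_diff(small_x, small_y, "sample"))     # visibly different
c(d = std_mean_diff(big_x, big_y, "population"),
  g = std_mean_diff(big_x, big_y, "sample"))         # nearly identical
```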
35,208 | What do we call multiple testing?

If all aspects of a test are specified in advance and its assumptions are met, you can safely conclude that the null hypothesis will be rejected erroneously at the frequency defined by the error level. If you conduct several tests (a “family” of tests), each of these tests is an additional occasion to commit this error.
Each individual test might still have its nominal error level but the probability that you reject at least one null hypothesis erroneously in the family will be higher. To the extent that you had reasons to set an error level in the first place, this is a problem as the probability to commit at least one error is higher than said error level. This is the core of the concern about multiple testing and it seems to apply to all four situations you describe.
Now, if the tests are independent and all null hypotheses are true, you know what the probability of committing at least one error over the whole family is (incidentally, you also know that any rejection must be erroneous). If they are not independent or some of the null hypotheses are in fact not true, not only is the actual family-wise error level higher than the nominal level but it's difficult to know exactly how high (you can however put bounds on it; that's the reasoning behind the Bonferroni adjustment). If the various hypotheses are related in some way, specific solutions might apply (for example classical “multiple comparison” techniques, multivariate tests, sequential procedures in clinical trials) but even if they do not, the problem is still there.
Repeatedly testing as you collect data (also known as optional stopping or “sampling to a foregone conclusion”), trying various techniques, analyzing various subsamples or dependent variables also expose you to multiple testing issues. These situations are not always discussed together but there is no reason why they should not. Different techniques testing the same hypothesis or related ones (your point 4) might be closely related and possibly not entail as much of an increase in the family-wise error level as multiple tests on completely unrelated samples but you are still conducting several tests.
Possibly the most delicate issue is point 3. In such a setting, you could very well run a single statistical test. How could that lead to a multiple testing problem? One argument in favor of this view is that the p-value depends on the distribution of a test statistic over hypothetical replications. If you would replicate this experiment, you would carry out a different test each time depending on how the data “look”. The distribution of this test statistic would not be the same as if you would blindly test the same comparison every time as it is also influenced by this prior informal visual inspection of the data. In fact, you are implicitly considering many possible comparisons in your study, a multiple testing situation.
A similar reasoning also applies to the situation described in point 4. It might or might not correspond to what is usually called “multiple testing issues” (the perennial problem of is-this-really-called-X questions) but the consequence is the same: The tests are uninterpretable as they could be far from their nominal error level. The situation is further muddled by the fact that you propose conducting further tests based on the results of the earlier ones but you are in any case willing to run multiple tests. (Note that this is based on the fact that you claimed to make decisions based on significance alone. Picking a model based on the residuals or some other diagnostics and only conduct one significance test seems a much better approach.)
My reasoning on the last two points is inspired in particular by Wagenmakers, E.-J. (2007). A practical solution to the pervasive problems of p values. Psychonomic Bulletin & Review, 14(5), 779-804.
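For independent tests of true null hypotheses, the family-wise error level discussed above has a closed form, $1-(1-\alpha)^m$ for $m$ tests at level $\alpha$, which makes both the inflation and the Bonferroni bound easy to see in R:

```r
alpha <- 0.05
m <- 1:20

# probability of at least one erroneous rejection among m independent true nulls
fwer <- 1 - (1 - alpha)^m
round(fwer[c(1, 5, 10, 20)], 3)   # rises quickly: 0.050, 0.226, 0.401, 0.642

# Bonferroni keeps the family-wise rate at or below alpha by testing each
# hypothesis at level alpha/m
fwer_bonf <- 1 - (1 - alpha / m)^m
all(fwer_bonf <= alpha + 1e-12)
```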
35,209 | What do we call multiple testing?

Items 3 and 4 on your list appear to be most closely related to the problem of controlling the error rate. I hope someone else will comment on items 1 and 2. I have never quite known how to think of multiple testing of regression coefficients from the same model.
Even if you don't actually test the data, but visually look for patterns that you can then test, you will have an error rate problem. Quantifying it may be hard, but it's there.
The essence is that by random chance you will find patterns in most data sets. The hard part is making the case that they are real (i.e., exist in the population from which the data came) rather than a chance function of a random draw from the population. It's easier to claim they are real if you haven't scoured the data for patterns.
Scientists concerned with publication (or more generally concerned with getting small $p$-values) who do not control the error rate and look for patterns around which a story can be built are said to be capitalizing on chance. They are taking advantage of the fact that patterns, random fluctuations or not, can always be found. This can happen consciously or subconsciously.
35,210 | What do the "coefficients" in R's HoltWinters function represent?

+1, this is confusing. If your time series has length $N$ and frequency $p$, then the so-called "coefficients" (which can be accessed as HW$coeff if HW is the object returned by HoltWinters) are exactly the values $a[N]$, $b[N]$ and $s[N-p+1]$, $s[N-p+2], \cdots, s[N]$, where these are defined by the formulas in the Holt Winters help page, which can be accessed from within R with ?HoltWinters.
For the additive model, which is the default, suppose my.ts is a time series object with positive frequency $p$. The values of $a[N-1]$, $b[N-1]$ and all the earlier $s[t]$ up to $s[N-p]$ are given in the table HoltWinters(my.ts)$fitted. The values in HoltWinters(my.ts)$coeff are calculated from these using the formulas
$$a[t] = α (Y[t] - s[t-p]) + (1-α) (a[t-1] + b[t-1])$$
$$b[t] = β (a[t] - a[t-1]) + (1-β) b[t-1]$$
with $t=N$ and $\alpha = $ HoltWinters(my.ts)$alpha, $\beta = $ HoltWinters(my.ts)$beta, and
$$s[t] = γ (Y[t] - a[t]) + (1-γ) s[t-p]$$
with $t=N-p+1, \ldots, N$ and $\gamma = $ HoltWinters(my.ts)$gamma.
This works for $a$ and $b$ (the level and trend) but when I do the calculation for the seasonals, I get slightly different values (within 5% or so) than are given in the output. I hope somebody can edit this answer to explain what is going on with the seasonals. Here is a link to the C code for the hw function which is called by the HoltWinters function:
https://svn.r-project.org/R/trunk/src/library/stats/src/HoltWinters.c
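The recursions above can also be sketched outside R. Below is a minimal one-step version of the additive update (Python, for illustration only — the variable names mirror the formulas, not R's internals):

```python
# One additive Holt-Winters update: given the previous level a_prev,
# trend b_prev, the seasonal value s_old from one period back, and the
# new observation y, produce the updated (a, b, s).
def hw_additive_step(y, a_prev, b_prev, s_old, alpha, beta, gamma):
    a = alpha * (y - s_old) + (1 - alpha) * (a_prev + b_prev)  # level
    b = beta * (a - a_prev) + (1 - beta) * b_prev              # trend
    s = gamma * (y - a) + (1 - gamma) * s_old                  # season
    return a, b, s

# With alpha = beta = gamma = 1 the model just tracks the data:
a, b, s = hw_additive_step(y=10.0, a_prev=4.0, b_prev=1.0, s_old=2.0,
                           alpha=1.0, beta=1.0, gamma=1.0)
# a = 10 - 2 = 8, b = 8 - 4 = 4, s = 10 - 8 = 2
```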
35,211 | What do the "coefficients" in R's HoltWinters function represent? | The meaning of the a, b, s, alpha, beta and gamma parameters is described in the help on the HoltWinters function (try ?HoltWinters in R), under Details.
e.g. the additive model is described so:
Yhat[t+h] = a[t] + h * b[t] + s[t - p + 1 + (h - 1) mod p],
where a[t], b[t] and s[t] are given by
a[t] = α (Y[t] - s[t-p]) + (1-α) (a[t-1] + b[t-1])
b[t] = β (a[t] - a[t-1]) + (1-β) b[t-1]
s[t] = γ (Y[t] - a[t]) + (1-γ) s[t-p]
If we look at the help, one of the examples is:
(m <- HoltWinters(co2))
plot(m)
plot(fitted(m))
With output:
Holt-Winters exponential smoothing with trend and additive seasonal component.
Call:
HoltWinters(x = co2)
Smoothing parameters:
alpha: 0.5126484
beta : 0.009497669
gamma: 0.4728868
Coefficients:
[,1]
a 364.7616237
b 0.1247438
s1 0.2215275
s2 0.9552801
s3 1.5984744
s4 2.8758029
s5 3.2820088
s6 2.4406990
s7 0.8969433
s8 -1.3796428
s9 -3.4112376
s10 -3.2570163
s11 -1.9134850
s12 -0.5844250
Now let's look at the output of calling coefficients:
coefficients(m)
a b s1 s2 s3 s4
364.7616237 0.1247438 0.2215275 0.9552801 1.5984744 2.8758029
s5 s6 s7 s8 s9 s10
3.2820088 2.4406990 0.8969433 -1.3796428 -3.4112376 -3.2570163
s11 s12
-1.9134850 -0.5844250
Which correspond exactly to the output of the same quantities generated before.
Taking into account the description of a, b, s, alpha, beta and gamma on the help page, which parts are unclear to you?
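As a sanity check, the printed coefficients can be plugged into the Yhat formula directly. A small sketch (Python, using the co2 values printed above):

```python
# h-step-ahead additive Holt-Winters forecast from the final
# coefficients: Yhat[t+h] = a + h*b + s[t - p + 1 + (h-1) mod p].
a = 364.7616237
b = 0.1247438
s = [0.2215275, 0.9552801, 1.5984744, 2.8758029, 3.2820088, 2.4406990,
     0.8969433, -1.3796428, -3.4112376, -3.2570163, -1.9134850, -0.5844250]

def forecast(h, p=12):
    return a + h * b + s[(h - 1) % p]

print(round(forecast(1), 4))  # one step ahead: 365.1079
```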
35,212 | What do the "coefficients" in R's HoltWinters function represent? | I agree that there is a puzzle. To see the puzzle I considered
the co2 series available in R. The answer is long; you may want to jump straight to the *-part that I added today.
I had expected that
co2HWBis$coefficients[2]
equals
co2HW$fitted[length(co2HW$fitted[,3]),3]
i.e. the coefficient equals the last estimated trend.
Below you can check that this is not the case.
However
co2HW$fitted[length(co2HW$fitted[,3]),3]
equals the coefficient you would obtain if you dropped the last value
of the series, as is demonstrated below. I suspect that the coefficient is
somehow "written forward". I furthermore find it puzzling that matters are
different if you allow beta to be estimated.
I am reading the source code
(http://svn.r-project.org/R/trunk/src/library/stats/R/HoltWinters.R)
but I am not yet sure what goes on.
This is the complete code
rm(list=ls())
co2HW = HoltWinters(co2, alpha = 0.2, gamma = 0.2, beta = 0.5)
# co2HW$coeff[2]
co2HW$fitted[length(co2HW$fitted[,3]),3]
co2Bis = window(co2,end=c(1997,11))
co2HWBis = HoltWinters(co2Bis, alpha=0.2, gamma=0.2, beta=0.5)
co2HWBis$coefficients[2]
# co2HWBis$fitted[length(co2HWBis$fitted[,3])-1,3]
co2HW$beta*(co2HW$fitted[length(co2HW$fitted[,2]),2] -
co2HW$fitted[length(co2HW$fitted[,2])-1,2]) +
(1 - co2HW$beta)*co2HW$fitted[length(co2HW$fitted[,3])-1,3]
#####################
co2HW = HoltWinters(co2, alpha = 0.2, gamma = 0.2)
# co2HW$coeff[2]
co2HW$fitted[length(co2HW$fitted[,3]),3]
co2Bis = window(co2,end=c(1997,11))
co2HWBis = HoltWinters(co2Bis, alpha=0.2, gamma=0.2)
co2HWBis$coefficients[2]
# co2HWBis$fitted[length(co2HWBis$fitted[,3])-1,3]
co2HW$beta*(co2HW$fitted[length(co2HW$fitted[,2]),2] -
co2HW$fitted[length(co2HW$fitted[,2])-1,2]) +
(1 - co2HW$beta)*co2HW$fitted[length(co2HW$fitted[,3])-1,3]
*-part
... one night later I think I can give an answer that looks like an answer. In my opinion the problem is the timing of the table co2HW$fitted. The last line does not contain the estimated level, trend and season of the last period in the sample. The coefficients are the estimated level, trend and season of the last period, but these values are not displayed in the table. I hope the following code is convincing:
rm(list=ls())
x = co2
m = HoltWinters(x)
# m$fitted[length(m$fitted[,3]),3]
aux1 = m$alpha*( x[length(x)] - m$fitted[length(m$fitted[,3]),4] ) +
( 1 - m$alpha )*( m$fitted[length(m$fitted[,3]),3] +
m$fitted[length(m$fitted[,3]),2] );
aux1
m$coeff[1]
aux2 = m$beta*(aux1 - m$fitted[length(m$fitted[,3]),2] ) +
(1-m$beta)*m$fitted[length(m$fitted[,3]),3]
aux2
m$coeff[2]
m$coeff[14]
aux3 = m$gamma*(x[length(x)] - aux1) +
( 1 - m$gamma )*m$fitted[length(m$fitted[,3]),4]
aux3
35,213 | What do the "coefficients" in R's HoltWinters function represent? | I think the key point about the coefficients, which I couldn't see in the other answers but may have missed, is that they are the values of smoothed level and smoothed trend for the last period in the time series on which the forecast was based/made; and smoothed seasonal components for the last 12 months of that time series.
Understanding the table of fitted values for the forecast also helps. For each row corresponding to time t, the values of level and trend are the smoothed values for time t-1, and the value of season is the smoothed value for t-p. These are added to give the estimated true level for time t, Xhat.
I have only begun to use R fairly recently, so apologies if my terminology isn't fully accurate.
35,214 | What do the "coefficients" in R's HoltWinters function represent? | This is from the HoltWinters documentation in R. I had the same question and this answers why I could not calculate the same seasonal values. The function is using a decomposition method to find all the initial values when incorporating seasonality, whereas for single and double exponential smoothing it doesn't do this.
"For seasonal models, start values for a, b and s are inferred by performing a simple decomposition in trend and seasonal component using moving averages (see function decompose) on the start.periods first periods (a simple linear regression on the trend component is used for starting level and trend). For level/trend-models (no seasonal component), start values for a and b are x[2] and x[2] - x[1], respectively. For level-only models (ordinary exponential smoothing), the start value for a is x[1]."
Found this website that explains how to get initial values: https://robjhyndman.com/hyndsight/hw-initialization/
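The decomposition-based start values can be sketched without R. The following is a rough illustration (Python; it mirrors the general decompose idea, not the exact HoltWinters.c code): a centered moving average estimates the trend, the detrended values are averaged by season, and the seasonal means are normalized to sum to zero:

```python
# Rough sketch of decomposition-based initialization for a series x
# with even period p: centered moving average -> detrend -> seasonal means.
def initial_seasonals(x, p):
    half = p // 2
    trend = []
    for t in range(half, len(x) - half):
        window = x[t - half : t + half + 1]
        # classical 2xMA: half-weight on the two end points
        ma = (0.5 * window[0] + sum(window[1:-1]) + 0.5 * window[-1]) / p
        trend.append(ma)
    # detrended values, grouped by season index
    by_season = [[] for _ in range(p)]
    for i, tr in enumerate(trend):
        t = i + half
        by_season[t % p].append(x[t] - tr)
    means = [sum(v) / len(v) for v in by_season]
    grand = sum(means) / p
    return [m - grand for m in means]  # normalized to sum to zero

# Purely seasonal series: the trend is flat and the seasonals are recovered.
season = [1.0, -2.0, 3.0, -2.0]
x = season * 6
print([round(v, 6) for v in initial_seasonals(x, 4)])
```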
35,215 | What do the "coefficients" in R's HoltWinters function represent? | Coefficients in the Holt-Winters method
In the Holt-Winters model with seasonal = 'additive', I am able to reproduce the model's coefficients manually. It seems like we need to use a[N-p] and b[N-p] instead of a[N-1] and b[N-1] in the HW formula. This may be because 'a' and 'b' depend on the 's' component. I think the coefficients are the smoothed-out values for the Nth data point and for the last season. If the season has length p, then there will be p smoothed values.
result=HoltWinters(co2,alpha=NULL,beta=NULL,gamma=NULL,seasonal = 'additive')
result$coef
Let us produce the coefficients manually:
alphat=0.5126484
betat = 0.009497669
gammat=0.4728868
N=length(co2)
p=length(result$coef)-2
A_N=alphat*(co2[N]-result$fitted[N-p,4])+(1-alphat)*(result$fitted[N-p,2]+
result$fitted[N-p,3])
A_N
B_N=betat*(A_N-result$fitted[N-p,2])+(1-betat)*(result$fitted[N-p,3])
B_N
S_N=gammat*(co2[N]-A_N)+(1-gammat)*result$fitted[N-p,4]
S_N
35,216 | Statistical Difference from Zero | A t-test would be able to test if the average of all the values is different from 0. There is no second set of data, so you want a one-sample t-test. In R:
x <- c(1,2,3,4) #PUT YOUR DATA HERE
t.test(x)
would do.
But if you want to test whether the trend is different from 0, that's a different question. How to do that would depend on whether the time intervals in your data are equal, whether you want to look at possible seasonality and so on. A simple (but possibly incorrect) method would be:
x <- c(1,2,3,4) #PUT YOUR DATA HERE
time <- seq(1,length(x))
model1 <- lm(x~time)
before that, though, I would make some plots, e.g
plot(x~time)
and perhaps look at some smoothers.
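The lm(x~time) step amounts to an OLS slope plus its t-statistic. A bare-bones version (Python, for illustration; like the simple method above, it ignores autocorrelation):

```python
import math

# Ordinary least-squares slope of x on time 1..n, its standard error,
# and the t-statistic slope/se (no autocorrelation correction).
def trend_t_stat(x):
    n = len(x)
    times = list(range(1, n + 1))
    tbar = sum(times) / n
    xbar = sum(x) / n
    sxx = sum((t - tbar) ** 2 for t in times)
    slope = sum((t - tbar) * (v - xbar) for t, v in zip(times, x)) / sxx
    intercept = xbar - slope * tbar
    resid = [v - (intercept + slope * t) for t, v in zip(times, x)]
    s2 = sum(r * r for r in resid) / (n - 2)   # residual variance
    se = math.sqrt(s2 / sxx)                   # standard error of the slope
    return slope, slope / se

slope, tstat = trend_t_stat([1.0, 2.1, 2.9, 4.2, 4.8])
# slope is close to 1; a large |t| is evidence of a nonzero trend
```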
35,217 | Statistical Difference from Zero | No, don't use 0s, use the one sample t-test and test whether the mean differs from 0 (it does).
In R it goes as follows:
x <- c(0.2245, 0.243, 0.2312, 0.1795, 0.1923, 0.17, 0.2025, 0.2059,
0.2394, 0.205, 0.2201, 0.2261, 0.1817, 0.2143, 0.2126, 0.237,
0.1984, 0.228, 0.2292, 0.2236, 0.2096, 0.2258, 0.2155)
> t.test(x)
One Sample t-test
data: x
t = 52.3, df = 22, p-value < 0.00000000000000022
alternative hypothesis: true mean is not equal to 0
95 percent confidence interval:
0.2052 0.2222
sample estimates:
mean of x
0.2137
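The printed statistics can be reproduced by hand from $t=\bar{x}/(s/\sqrt{n})$. A quick check with the same numbers (Python, standard library only):

```python
import math
from statistics import mean, stdev

x = [0.2245, 0.243, 0.2312, 0.1795, 0.1923, 0.17, 0.2025, 0.2059,
     0.2394, 0.205, 0.2201, 0.2261, 0.1817, 0.2143, 0.2126, 0.237,
     0.1984, 0.228, 0.2292, 0.2236, 0.2096, 0.2258, 0.2155]

# one-sample t statistic against mu = 0
t_stat = mean(x) / (stdev(x) / math.sqrt(len(x)))
print(round(mean(x), 4), round(t_stat, 1))  # matches t.test: 0.2137, 52.3
```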
35,218 | Sum standard deviation vs standard error | The sum standard deviation is, as the name suggests, the standard deviation of the sum of $n$ random variables. The standard error you're talking about is just another name for the standard deviation of the mean of $n$ random variables. As you noted, the two formulas are closely related; since the sum of $n$ random variables is $n$ times the mean of $n$ random variables, the standard deviation of the sum is also $n$ times the standard deviation of the mean:
$\sigma_{X_{sum}} = \sqrt n\sigma_X = n \times \frac{\sigma_X}{\sqrt n} = n\times \sigma_\bar{X}$.
In the first problem you are dealing with a mean, the average of twelve bottles, so you use the standard deviation of the mean, which is called standard error. In the second problem you are dealing with a sum, the total weight of 20 packages, so you use the standard deviation of the sum.
Summary: use standard error when dealing with the mean (averages); use sum standard deviation when dealing with the sum (totals).
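With made-up numbers (say $\sigma = 0.5$ and $n = 20$ — these values are illustrative, not taken from the problems), the two formulas and the identity relating them look like this in Python:

```python
import math

sigma = 0.5   # assumed per-item standard deviation (illustrative)
n = 20        # number of items

se = sigma / math.sqrt(n)        # std. dev. of the mean (standard error)
sum_sd = math.sqrt(n) * sigma    # std. dev. of the sum

# sigma_sum = n * sigma_mean, exactly as derived above
assert math.isclose(sum_sd, n * se)
print(round(se, 4), round(sum_sd, 4))
```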
35,219 | Sum standard deviation vs standard error | The first standard deviation formula you gave is the SD for a sum. The standard error is the SD of the sample mean. Remember that:
$\text{Var}(aX)=a^2 \text{Var}(X)$ and the variance of the sum is the sum of the variances (First formula). So
$\text{Var}(\bar{X})=\frac{n\sigma^2}{n^2}=\sigma^2/n$. Taking the square root gives the result.
Recall:
$\text{Var}(\sum X_i)=\sum \text{Var}(X_i)=n \sigma^2,$ the variance of the sum.
Problem 1 is looking for a statement about the sample mean; Problem 2 is about the sum, since the weight of the package is the sum of the weights of individual tea bags.
35,220 | Is PCA appropriate when $n<p$? | Yes, you surely can do that. I don't know of applications in ecology, but you may be interested to know that this is widely used in genetics (epidemiology and population genetics), with $n \ll p$, typically $n = 1000$ or $5000$ individuals and $p = 500\,000$ genotypes.
To adjust analyses for population admixture, the first 10 or 50 PCs are used. The first two PCs already give a lot of information, as shown in Novembre J (2008). Pay special attention to figure 1, where you see that the first two PCs obtained from genomic data roughly retrieve the spatial arrangement of populations within Europe.
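One caveat worth knowing when $n<p$: after centering, the data matrix has rank at most $n-1$, so PCA can return at most $n-1$ informative components. A small pure-Python sketch (toy numbers) checking this via the $n\times n$ Gram matrix:

```python
# With n samples and p features (n < p), the centered data matrix has
# rank at most n-1, so at most n-1 principal components are meaningful.
def matrix_rank(m, tol=1e-10):
    # Gaussian-elimination rank of a small dense matrix
    m = [row[:] for row in m]
    rank = 0
    for col in range(len(m[0])):
        pivot = next((r for r in range(rank, len(m))
                      if abs(m[r][col]) > tol), None)
        if pivot is None:
            continue
        m[rank], m[pivot] = m[pivot], m[rank]
        for r in range(len(m)):
            if r != rank and abs(m[r][col]) > tol:
                f = m[r][col] / m[rank][col]
                m[r] = [a - f * b for a, b in zip(m[r], m[rank])]
        rank += 1
    return rank

# n = 3 samples, p = 5 features (toy data)
X = [[1.0, 2.0, 0.0, 4.0, 1.0],
     [2.0, 1.0, 3.0, 0.0, 5.0],
     [0.0, 0.0, 1.0, 2.0, 3.0]]
means = [sum(col) / len(X) for col in zip(*X)]
Xc = [[v - mu for v, mu in zip(row, means)] for row in X]
gram = [[sum(a * b for a, b in zip(r1, r2)) for r2 in Xc] for r1 in Xc]
print(matrix_rank(gram))  # at most n - 1 = 2
```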
35,221 | Filtering using ARMA model in R | I think this does what you want:
library(forecast)
fit <- Arima(x,order=c(1,0,1))
yfiltered <- residuals(Arima(y,model=fit))
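The filtering step itself — running y through the model estimated on x and keeping the innovations — can also be written out by hand. A sketch (Python) for a zero-mean ARMA(1,1) with known coefficients, which is the role Arima(y, model=fit) plays above:

```python
# Innovations of a zero-mean ARMA(1,1): y[t] = phi*y[t-1] + e[t] + theta*e[t-1]
# Rearranged: e[t] = y[t] - phi*y[t-1] - theta*e[t-1]  (e[0] initialized to y[0])
def arma11_residuals(y, phi, theta):
    e = [y[0]]
    for t in range(1, len(y)):
        e.append(y[t] - phi * y[t - 1] - theta * e[t - 1])
    return e

# Round-trip check: build a series from known shocks, then recover them.
phi, theta = 0.6, 0.3
shocks = [0.5, -1.0, 0.2, 0.8, -0.3]
y = [shocks[0]]
for t in range(1, len(shocks)):
    y.append(phi * y[t - 1] + shocks[t] + theta * shocks[t - 1])

recovered = arma11_residuals(y, phi, theta)
# recovered matches shocks (up to floating-point noise)
```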
35,222 | Filtering using ARMA model in R | I suggest three different functions:
stats:::arima
forecast:::Arima
forecast:::auto.arima
forecast:::auto.arima will automatically select the p and q lags for you.
35,223 | Good F1 score for anomaly detection | You may have guessed it already, but it depends on ...
what the data allows you to achieve at maximum. Regarding this, please see Expected best performance possible on a data set
the cost of a wrong prediction and the benefit of a detected anomaly. I recommend The Foundations of Cost-Sensitive Learning by Charles Elkan for creation of such a cost-benefit-matrix.
So in summary, as long as the F1-score is significantly better than that of a random classifier (or any other dummy approach) and the cost-benefit calculation based upon the model allows the conclusion that it is useful in practice, the corresponding F1-score can be considered good.
Edit: What is the F1-score of a random classifier?
Let's say we have 2 classes $c_1,c_2$.
Let's denote the a priori class probabilities as $p(c_1),p(c_2)$, $p(c_1)+p(c_2)=1$, where $p(c_k)$ = the number of instances with class $c_k$ divided by the number of all instances.
If the random classifier is presented an instance, it selects a class with probability $p_{random}(c_k)$. Two major options exist:
$p_{random}(c_k)=p(c_k)$. This classifier will maximize the overall accuracy.
$p_{random}(c_k)=\frac{1}{2}$. This classifier will maximize recall for a minor class (as it is in case of anomaly detection) for the cost that more instances of the major class are not caught (i.e. the recall of the major class decreases). That's why a cost-maxtrix is needed.
Since the random classifier assigns the class independently of the presented instance, the prior class probabilities and hence the precision remain the same. So the precision for class k is just $precision(c_k)=p(c_k)$
Using the same independence argument, Recall, which is ratio of items of class k which have been correctly classified, is just $p_{random}(c_k)$.
Then, the F1-score for class k is (reference)
$F_1(c_k)=2\frac{precision*recall}{precision + recall}$
$=2\frac{p(c_k)p_{random}(c_k)}{p(c_k)+p_{random}(c_k)}$
Example:
Let $p(anomaly)=0.01,p(\neg anomaly)=0.99$. Then
If $p_{random}(c_k)=p(c_k)$:
precision(anomaly)=0.01, recall(anomaly)=0.01, F1(anomaly)~0.01 (accuracy=0.9892)
If $p_{random}(c_k)=\frac{1}{2}$:
precision(anomaly)=0.01, recall(anomaly)=0.5, F1(anomaly)~0.01961 (accuracy=0.5)
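Both example values follow directly from the formula; a quick base-R check (f1_random is just a local helper name):

```r
# F1 of a random classifier: precision = p(c_k), recall = p_random(c_k)
f1_random <- function(p, p_rand) 2 * p * p_rand / (p + p_rand)

f1_random(0.01, 0.01) # 0.01
f1_random(0.01, 0.5)  # ~0.01961
```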
I have "verified" the calculation of precision and recall using the following MC-simulation (sorry for the quality, my R is a little bit rusty).
size <- 10000                    # instances per simulated data set
trials <- 10000                  # Monte-Carlo repetitions
anomalyCount <- size * 0.01      # p(anomaly) = 0.01
noAnomalyCount <- size - anomalyCount
dat <- data.frame(id = 1:size,
                  class = c(rep("anomaly", anomalyCount),
                            rep("no_anomaly", noAnomalyCount)))
prandom <- 0.01                  # p_random(anomaly); set to 0.5 or 0.01 to reproduce the two scenarios below
precisionValues <- rep(NA, trials)
recallValues <- rep(NA, trials)
for (i in 1:trials) {
  # the random classifier ignores the instance and just draws a label
  predictions <- sample(c("anomaly", "no_anomaly"), size,
                        replace = TRUE, prob = c(prandom, 1 - prandom))
  anomalyInds <- which(predictions == "anomaly")
  hits <- sum(dat$class[anomalyInds] == predictions[anomalyInds])
  precisionValues[i] <- hits / length(anomalyInds)
  recallValues[i] <- hits / anomalyCount
}
which delivers for $p_{random}(anomaly)=0.5$
[1] "average precision 0.00998538701407172 with error +/- 1.00354527409206e-07"
[1] "average recall 0.499318 with error +/- 1.00354527409206e-07"
and for $p_{random}(anomaly)=p(anomaly)=0.01$
[1] "average precision 0.0100008820210962 with error +/- 9.88994082273052e-07"
[1] "average recall 0.009979 with error +/- 9.88994082273052e-07"
35,224 | Classification error is lower when I don't do any learning on the dataset? | It is not true that you are not doing any learning. What you are doing is using the well known classification algorithm called Nearest Neighbor (NN). It is important to realize that you are learning as long as you are using the train data (even if you don't explicitly calculate some parameter) - and in this case you are definitely using it.
It is ok that NN is doing well. However, in some cases it may be a sign that there is a problem with your data. This can happen when your data is not IID. For example, in some cases you may have exact or close duplicates in your data. In such a case, many instances in the test set will have a close neighbor in the train set and you will get a high success rate but in fact you are overfitting, because if you get a new point without duplicates your performance will be worse. What you can do in this case is try to remove duplicates in advance, or construct the train/test sets such that duplicates (or tight clusters) have to be in the same set. It is important to look at the data and try to understand what is going on.
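A minimal base-R sketch of the first suggestion, removing exact duplicates before the train/test split (dat is a hypothetical data frame of feature columns):

```r
set.seed(42)
dat <- data.frame(x1 = c(1, 2, 2, 3), x2 = c(1, 5, 5, 7)) # row 3 duplicates row 2

dat_unique <- dat[!duplicated(dat), ]                      # drop exact duplicates first
train_idx  <- sample(nrow(dat_unique), floor(0.7 * nrow(dat_unique)))
train <- dat_unique[train_idx, ]
test  <- dat_unique[-train_idx, ]                          # no test row has an exact twin in train
```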
35,225 | Computing a p-value using bootstrap | Great set of alternatives for generating a discussion!
1) Your primary suggestion is actually a form of permutation test rather than a bootstrap, and does indeed generate p-values. The p-values are exact under the assumption of exchangeability, which I can't describe in terms specific to your model as I don't know your model, and if you can do all possible permutations. In a linear regression, assuming the residuals are i.i.d. is sufficient for exchangeability to hold. However, it would be proper to refit the hyperparameters inside the loop.
Note that in unbalanced ANOVA problems with missing cells it may not be possible to devise a permutation test because of the exchangeability requirement (Good, p. 138, for an example.)
2) Using sampling with replacement turns the permutation test into a nonparametric bootstrap-based test. Once again, it would be proper to refit the hyperparameters inside the loop, but exchangeability is not required.
Generally a nonparametric bootstrap test isn't exact or conservative, and it's less powerful than a permutation test, so I'd prefer the permutation test (if I have a choice.) There are also parametric bootstraps which can be used when the distribution of the test statistic under the null hypothesis is known. These are more powerful than their nonparametric cousins. In your case, if you have a linear regression and you are willing to assume i.i.d. Normal residuals, the distribution of $\hat{\beta}$ under the null hypothesis is known and you can use that fact to construct a parametric bootstrap. Still, you're not going to beat the permutation test.
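A minimal sketch of option 1) for a single regression coefficient (base R; y and x below are made-up data, and refitting hyperparameters is omitted for brevity):

```r
set.seed(1)
n <- 100
x <- rnorm(n)
y <- 0.3 * x + rnorm(n)

beta_obs  <- coef(lm(y ~ x))["x"]                          # observed slope
beta_perm <- replicate(999, coef(lm(sample(y) ~ x))["x"])  # permute y, refit

# two-sided permutation p-value (with the usual +1 correction)
p_value <- (sum(abs(beta_perm) >= abs(beta_obs)) + 1) / (999 + 1)
```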
A good reference for this is Permutation, Parametric, and Bootstrap Tests of Hypotheses (Good).
35,226 | How to calculate with tiny probabilities and large samples? | In physics, a Fermi problem is an exercise which asks you to estimate an order of magnitude. You can do the same for probabilities. With practice, your intuition should improve.
As Xi'an commented, you can use logarithms. Perhaps you can't see $2^{2^{25}} \gg 10^{10}$ at a glance, but you can see that $2^{25} \gg 10$ (or $10 \log_2 10 \approx 33$), which implies it.
Instead of using complicated formulas to compute exact values you don't need, use estimates which are simple to calculate. For example, the probability there is at least one other person with your genome (ignoring twins) is at most the expected number of people with the same genome, a simple product $\frac {1}{2^{2^{25}}} (7 \times 10^9)$ which you should be able to estimate as very small. Similarly, the probability that some pair of people have the same genome is at most the expected number of pairs of people with the same genome, about
$$ \frac{\frac 12 (7 \times 10^9)^2}{2^{2^{25}}}$$
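Working in logarithms makes the size of this quantity computable even though the number itself underflows (a quick base-R check):

```r
# log10 of (1/2 * (7e9)^2 / 2^(2^25)); direct evaluation would underflow to 0
log10_pairs <- log10(0.5) + 2 * log10(7e9)   # ~19.4: log10 of the numerator
log10_prob  <- log10_pairs - 2^25 * log10(2) # ~ -1.01e7: an astronomically small probability
```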
By the way, I don't accept this model of probability for the genome. I just used your model for examples. This model would predict that the genetic similarity typically found between siblings is astronomically unlikely.
35,227 | How to calculate with tiny probabilities and large samples? | I think this amounts to a problem of estimating the extreme tails of a probability distribution without the extremely large sample size needed to observe even a few values at those extreme values. The only way to do this is by assuming a parametric model which "automatically" assumes a shape for the distribution's tails. But if you have justification for the probability model then you can get the estimates you seek by fitting the density from the parametric family and using it to integrate over the tail area to estimate that small probability. If the parametric assumption is wrong the estimate could be way off (by orders of magnitude).
35,228 | Based on factor loadings (in factor analysis) can we give unequal weights to Likert scale items? | Yes, it is possible to supply each item with its own weight. This weight, however, cannot be the loading itself because - you might remember - loading is a regression coefficient of a factor in predicting an item, not vice versa. The weight you imply must be a regression coefficient of an item in predicting a factor. We obtain those weights when we compute factor scores; the weights $\mathbf{B}$ are estimated from the inter-item correlation (or covariance) matrix $\mathbf{R}$ and loadings matrix $\mathbf{A}$ typically this way: $\mathbf{B}= \mathbf{R}^{-1} \mathbf{A}$. (If factors were obliquely rotated then in this formula the factor structure matrix should replace $\mathbf{A}$.) See also, where coarse and refined methods are considered; the coarse method permits using loadings as weights.
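A small base-R illustration of $\mathbf{B}= \mathbf{R}^{-1} \mathbf{A}$ (the loadings and correlations below are made-up values, not from any real scale):

```r
A <- matrix(c(0.8, 0.7, 0.6), ncol = 1)  # hypothetical loadings of 3 items on 1 factor
R <- matrix(c(1.0, 0.5, 0.4,
              0.5, 1.0, 0.3,
              0.4, 0.3, 1.0), nrow = 3)  # hypothetical inter-item correlation matrix

B <- solve(R) %*% A  # factor score weights: one regression weight per item
```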
If so, then why do researchers generally use Likert scales with equally weighted items? In other words, why do they often prefer just binary weights 1 or 0 in place of the above computed fractional weights? There may be several reasons. To mention just three... First, the above weights $\mathbf{B}$ are not precise (unless we used PCA model rather than factor analysis model per se) due to the fact that the uniqueness of an item is not known on the level of each case (respondent), and thereby computed factor scores are only an approximation of true factor values. Second, computed weights $\mathbf{B}$ usually will vary from sample to sample and eventually they show not much better than simply 1 vs 0 weights. Third, the weighted-sum model behind a summative (Likert) construct is a simplification in principle. It implies that the trait that is measured by the scale depends on all its items simultaneously whatever its pronouncedness. But we know that many traits behave differently. For instance, when a trait is weak, it may show only a subset of symptoms (i.e. items), but those expressed in full; as the trait grows stronger, more symptoms join in, some partly expressed, some expressed in full and even replacing those "older" symptoms. This dynamic and unpredictable internal growth of a trait cannot be modeled by a weighted linear combination of its phenomena. In this situation, using fine fractional weights is in no way better than using binary 0-1 weights.
35,229 | Looking for a good online resource about survey design and analysis | On the more qualitative side of things in survey methodology, see the books mentioned in https://blogs.rti.org/surveypost/2012/05/15/surveying-on-a-deserted-island-a-bakers-dozen-list-of-resources-to-take-along/. If you are totally new to sampling, Lohr (2009) is a more modern treatment covering additionally some of the aspects of survey data analysis, replicate variance estimation, and some practical aspects, as compared to the somewhat more formal classics mentioned by Michael Chernick.
Update: I posted the original answer in 2012. Since then, another good book came out: Valliant, Dever and Kreuter (2013). Practical Tools for Designing and Weighting Survey Samples (Amazon). This is a good practical book only to the extent that you know the basics, so my advice to learn from Lohr's book first still applies.
35,230 | Looking for a good online resource about survey design and analysis | I recommend Cochran's Sampling Techniques. It provides the fundamentals and is very clear. Leslie Kish's Survey Sampling is another classic that I can recommend. An advantage of going to amazon is that there are often many user generated book reviews there for the OP to look at. I personally have written a lot of reviews there.
I can think of nothing better than an amazon link for a textbook because once you get to the site you have user and publisher reviews that you can read, and amazon often provides look-ins to the Table of Contents, Preface and excerpts of chapters. You are not going to get a link that will give you a free electronic copy of the book. Now reference articles that fit the bill might be possible to recommend, but I think the book recommendations are better references in this case. Perhaps a brief monograph like one of the little green SAGE books would suit the OP but I like these books better.
35,231 | Looking for a good online resource about survey design and analysis | Previous answers seem to have well addressed OP's question. However, I will add, for the benefit of future readers, that Thomas Lumley has provided a wealth of information on "complex surveys", which can be loosely characterized as surveys with often thousands to tens of millions or more of observations (perhaps data larger than your machine's memory), often implementing complex sampling methods (e.g. National Health Interview Survey or Nationwide Inpatient Sample).
Lumley has contributed to "complex survey" analysis through his R package survey. See here for a list of presentations about the survey package, and here for a number of very good vignettes on using the survey package.
35,232 | How to denote element-wise difference of two matrices | Assume your matrices are called $A$ and $B$; then it is usual to denote their elements by $a_{ij}$ and $b_{ij}$, respectively. So you could denote the sum of the squared errors as
$$
\text{SSE} = \sum_{i,j} (a_{ij}-b_{ij})^2.
$$
You would get your MSE in the usual way, by taking the average. Does this answer your question? It sort of seems too simple. You could also first define a new matrix $C$, via
$$c_{ij} = a_{ij}-b_{ij}$$
and work with that. As per the comment above, for the whole matrix you can also just write $$
C=A-B
$$
which works out elementwise as given above.
35,233 | How to denote element-wise difference of two matrices | Standard notation for addition/subtraction of matrices refers to elementwise addition/subtraction, so with standard notation you have:
$$\mathbf{A}-\mathbf{B} = \begin{bmatrix}
a_{11} - b_{11} & a_{12} - b_{12} & \cdots & a_{1m} - b_{1m} \\
a_{21} - b_{21} & a_{22} - b_{22} & \cdots & a_{2m} - b_{2m} \\
\vdots & \vdots & \ddots & \vdots \\
a_{n1} - b_{n1} & a_{n2} - b_{n2} & \cdots & a_{nm} - b_{nm} \\
\end{bmatrix}.$$
The quantity of interest to you (if I understand your description correctly) can be written in matrix form as:
$$\begin{align}
\text{SSE}
&\equiv \sum_{i=1}^n \sum_{j=1}^m (a_{ij} - b_{ij})^2 \\[6pt]
&= \sum_{i=1}^n \sum_{j=1}^m ([\mathbf{A}-\mathbf{B}]_{ij} )^2 \\[6pt]
&= \sum_{i=1}^n \sum_{j=1}^m [\mathbf{A}-\mathbf{B}]_{ij} [\mathbf{A}-\mathbf{B}]_{ij} \\[6pt]
&= \sum_{i=1}^n \sum_{j=1}^m [(\mathbf{A}-\mathbf{B})^\text{T}]_{ji} [(\mathbf{A}-\mathbf{B})]_{ij} \\[6pt]
&= \sum_{j=1}^m \sum_{i=1}^n [(\mathbf{A}-\mathbf{B})^\text{T}]_{ji} [(\mathbf{A}-\mathbf{B})]_{ij} \\[6pt]
&= \sum_{j=1}^m [(\mathbf{A}-\mathbf{B})^\text{T} (\mathbf{A}-\mathbf{B})]_{jj} \\[10pt]
&= \text{tr}((\mathbf{A}-\mathbf{B})^\text{T} (\mathbf{A}-\mathbf{B})). \\[6pt]
\end{align}$$
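The trace identity above is easy to check numerically. A quick sketch (matrix sizes and values are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)

# Two arbitrary matrices of the same shape
A = rng.normal(size=(3, 4))
B = rng.normal(size=(3, 4))

sse_elementwise = ((A - B) ** 2).sum()        # sum over all i, j
sse_trace = np.trace((A - B).T @ (A - B))     # tr((A-B)^T (A-B))
```

Both quantities agree up to floating-point error, confirming the derivation.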
35,234 | SEM with binary dependent variable | Did you read the original Olsson (1979) paper? I believe it still provides the best description of what polychoric correlations are (although I've probably skimmed only 10% of the existing literature, I have to admit; at some point, it just gets too repetitive of the limited number of ideas though). Polychoric correlations are ML estimates of the correlations of the underlying normal distribution, so you interpret them just as you would Pearson moment correlations with continuous data. Given the ML origins of polychoric correlations, I never understood the advice to use ADF or other least squares methods with them to obtain model parameter estimates, although I do understand that say diagonally weighted least squares (don't know if John Fox implemented them in sem though), while being less asymptotically efficient, don't need as much auxiliary information for estimation purposes.
There is no magic sample size number, like, you hit 2000 and -- BOOM! -- everything starts working. In my simulations (and I've done a few petaflops this way and that way for my papers), I've seen both cases when asymptotic results worked perfectly fine with $N=200$ and failed to work with $N=5000$. In the most peculiar cases, for the same method and distribution of the underlying data, some asymptotic aspects, such as confidence interval coverage say, would be OK for $N=300$, while others, like $\chi^2$ distribution of a test statistic, would not work until you have $N=1000$. So I am highly skeptical of any sample size advice, and would rather recommend to run a simulation addressing your particular sample size, model complexity and magnitude of the errors. The first paper to bash ADF (Hu, Bentler and Kano (1992)) used an insane degree of overidentification, something like 30 variables in the model, which translates to 400 degrees of freedom, and a sample size of 50. ADF wouldn't even begin to work in these circumstances, as it won't be able to invert the matrix of the fourth moments which will be rank-deficient. And to get 400 degrees of freedom for the test statistic with the sample size below 1000 is a high expectation, too.
So I understand the healthy skepticism that you are demonstrating, but there is simply nothing you can do in your situation about it. Just run polycor to get the correlation estimates, feed them to sem, and that would be it -- there is little you can do to produce a much better analysis.
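The rationale behind polychoric correlations — that dichotomizing a latent normal variable attenuates the ordinary Pearson (phi) correlation of the observed indicators — can be illustrated with a small simulation. This is only a sketch with made-up numbers, not a polychoric estimator:

```python
import numpy as np

rng = np.random.default_rng(0)

# Latent bivariate normal with correlation rho, dichotomized at zero:
# the Pearson (phi) correlation of the binary indicators is attenuated,
# which is what the polychoric/tetrachoric correlation tries to undo.
rho, n = 0.6, 200_000
z1 = rng.normal(size=n)
z2 = rho * z1 + np.sqrt(1 - rho**2) * rng.normal(size=n)
x, y = (z1 > 0).astype(float), (z2 > 0).astype(float)

phi = np.corrcoef(x, y)[0, 1]             # attenuated sample correlation
theory = (2 / np.pi) * np.arcsin(rho)     # phi for median splits
```

The sample phi lands near $2\arcsin(\rho)/\pi \approx 0.41$, well below the latent $\rho = 0.6$ that a polychoric estimate would recover.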
If you were a Stata user, I would immediately recommend gllamm package, but I am not sure whether a direct analogue of it exists in R.
35,235 | SEM with binary dependent variable | @StasK is correct in stating that the polychoric correlations can be interpreted similarly to pearson correlations; however, it sounds as if you are attempting to build a latent variable model and not simply interpret the correlation matrix, so i suggest not worrying about direct interpretation. suffice to say that polychoric correlations are appropriate for binary indicators.
the problem with binary indicators really stems from the fact that you're talking about (severely) non-normally distributed variables. this is a problem that normal theory estimators, such as ML and GLS, struggle to overcome - ML and GLS typically estimate inflated $\chi^2$ model-fit indices as well as under-estimating parameter variance, both leading to inflated type-I errors. nevermind the fact that you also have a small sample size.
given these issues, the weighted least squares with mean and variance correction (WLSMV) has been shown to be the most appropriate estimator for use with binary indicators. unfortunately, this estimator is only available in the Mplus software. Other than Mplus, the fa.poly function in the R package psych implements a WLS estimator which still runs into issues with sample size but is preferable to ADF or ML estimation.
for a good overview on the topic of categorical data in SEM (and, really, any latent variable model), i recommend the accessible chapter by Finney and DiStefano (2006).
...though you mentioned a continuous indicator, depending on the model you're trying to estimate, you may give item response theory (IRT) models a look. under certain conditions, they are seen to be equivalent to CFA/SEM models but differ in the estimation approach. Finch (2010) does a good job of illustrating the IRT/CFA equivalence.
Finch, H. (2010). Item parameter estimation for the MIRT model: Bias and precision of confirmatory factor-analysis based models. Applied Psychological Measurement, 34, 10-26.
Finney, S. J., & DiStefano, C. (2006). Nonnormal and categorical data in structural equation models. In G.R. Hancock & R.O. Mueller (Eds.). A second course in structural equation modeling (pp. 269-314). Greenwich, CT: Information Age.
35,236 | On what tasks does neuroevolution outperform basic application of neural networks or genetic algorithms? | This has been researched for 20 years or so, and there are many papers claiming to outperform backpropagation. Xin Yao did a lot of work on this in the 1990s, and Kenneth Stanley created one of the currently most active frameworks, NEAT (NeuroEvolution of Augmenting Topologies); see http://www.cs.ucf.edu/~kstanley/neat.html and http://tech.groups.yahoo.com/group/neat/.
There's a lot of published material on different neuroevolutionary techniques, but these references may be useful in getting a feel for progress over the years:
Azzini, A., Tettamanzi, A. (2008) 'Evolving Neural Networks for Static Single-Position Automated Trading', Journal of Artificial Evolution and Applications, Volume 2008, Article ID 184286
Hintz, K.J., Spofford, J.J. (1990) 'Evolving a Neural Network', Proceedings, 5th IEEE International Symposium on Intelligent Control, pp. 479-484
Miller, G.F., Todd, P.M., Hedge, S.U. (1989) 'Designing neural networks using genetic algorithms', Proceedings of the Third International Conference on Genetic Algorithms
Montana, D.J. (1995) 'Neural Network Weight Selection Using Genetic Algorithms', Intelligent Hybrid Systems
Yao, X. (1993) 'Evolutionary artificial neural networks', International Journal of Neural Systems, Vol. 4, No. 3, pp. 203-222
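To make the basic idea concrete, here is a minimal weight-level neuroevolution sketch — not NEAT, which also evolves topologies — where a fixed 2-2-1 network's weights are evolved on XOR by an elitist, mutation-only genetic algorithm. All hyperparameters are arbitrary choices for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# XOR task: 4 input patterns, binary targets
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0.0, 1.0, 1.0, 0.0])

def forward(w):
    W1 = w[:6].reshape(2, 3)                 # hidden layer: 2 inputs + bias
    W2 = w[6:]                               # output layer: 2 hidden + bias
    h = np.tanh(np.hstack([X, np.ones((4, 1))]) @ W1.T)
    return 1 / (1 + np.exp(-(np.hstack([h, np.ones((4, 1))]) @ W2)))

def mse(w):
    return ((forward(w) - y) ** 2).mean()

pop = rng.normal(size=(50, 9))               # population of weight vectors
init_best = min(mse(w) for w in pop)
for _ in range(200):
    order = np.argsort([mse(w) for w in pop])
    parents = pop[order[:10]]                # truncation selection (elitist)
    children = parents[rng.integers(0, 10, 40)] + rng.normal(0, 0.3, (40, 9))
    pop = np.vstack([parents, children])

final_best = min(mse(w) for w in pop)
```

Because the top individuals are carried over unchanged each generation, the best fitness can only improve over time.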
35,237 | Smallest Kullback-Leibler divergence | If you express the Kullback–Leibler divergence when $p_2$ is a normal pdf on $\mathbb R^d$,
\begin{align}
D&(p_1||p_2) =\int_{\mathbb R^d} p_1 \log p_1 \text{d}\lambda - \int_{\mathbb R^d} p_1 \log p_2 \text{d}\lambda\\
&= \int_{\mathbb R^d} p_1 \log p_1 \text{d}\lambda - \dfrac{1}{2} \int_{\mathbb R^d} p_1 \left\{-(x-\mu)^T \Sigma^{-1} (x-\mu) - \log |\Sigma| -d \log 2\pi \right\} \text{d}\lambda \\
&= \int_{\mathbb R^d} p_1 \log p_1 \text{d}\lambda + \dfrac{1}{2} \left\{ \log |\Sigma| + d \log 2\pi + \mathbb{E}_1 \left[ (x-\mu)^T \Sigma^{-1} (x-\mu) \right] \right\}
\end{align}
Now
$$
\mathbb{E}_1 \left[ (x-\mu)^T \Sigma^{-1} (x-\mu) \right]=
\mathbb{E}_1 \left[ (x-\mathbb{E}_1[x] )^T \Sigma^{-1} (x-\mathbb{E}_1[x]) \right]$$
$$\qquad\qquad\qquad + (\mathbb{E}_1[x]-\mu)^T \Sigma^{-1} (\mathbb{E}_1[x]-\mu)
$$
so the minimum in $\mu$ is indeed reached for $\mu=\mathbb{E}_1[x]$.
Minimising
$$\begin{align}
\log |\Sigma| &+ \mathbb{E}_1 \left[ (x-\mathbb{E}_1[x] )^T \Sigma^{-1} (x-\mathbb{E}_1[x]) \right] \\
&= \log |\Sigma| + \mathbb{E}_1 \left[ \text{trace} \left\{ (x-\mathbb{E}_1[x] )^T \Sigma^{-1} (x-\mathbb{E}_1[x]) \right\}\right] \\
&= \log |\Sigma| + \mathbb{E}_1 \left[ \text{trace} \left\{ \Sigma^{-1} (x-\mathbb{E}_1[x]) (x-\mathbb{E}_1[x] )^T \right\}\right] \\
&= \log |\Sigma| + \text{trace} \left\{ \Sigma^{-1} \mathbb{E}_1 \left[ (x-\mathbb{E}_1[x]) (x-\mathbb{E}_1[x] )^T \right] \right\} \\
&= \log |\Sigma| + \text{trace} \left\{ \Sigma^{-1} \Sigma_1 \right\}
\end{align}$$
leads to a minimum in $\Sigma$ for $\Sigma=\Sigma_1$.
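This optimum can be checked numerically in the special case where $p_1$ is itself Gaussian, using the closed-form Gaussian-to-Gaussian KL divergence. A sketch (the moments below are arbitrary test values):

```python
import numpy as np

def kl_gauss(mu1, S1, mu2, S2):
    """Closed-form KL( N(mu1, S1) || N(mu2, S2) )."""
    d = len(mu1)
    S2inv = np.linalg.inv(S2)
    diff = mu1 - mu2
    return 0.5 * (np.trace(S2inv @ S1) + diff @ S2inv @ diff - d
                  + np.log(np.linalg.det(S2) / np.linalg.det(S1)))

# The divergence vanishes when the second Gaussian matches p1's moments
# and is strictly positive for any perturbed mean or covariance.
mu1 = np.array([1.0, -2.0])
S1 = np.array([[2.0, 0.5], [0.5, 1.0]])

at_optimum = kl_gauss(mu1, S1, mu1, S1)
shift_mean = kl_gauss(mu1, S1, mu1 + 0.3, S1)
inflate_cov = kl_gauss(mu1, S1, mu1, S1 + 0.4 * np.eye(2))
```

`at_optimum` is zero up to floating-point error, while both perturbed divergences are strictly positive, consistent with $\mu=\mathbb{E}_1[x]$, $\Sigma=\Sigma_1$ being the minimizer.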
35,238 | What is the difference between 2x2 factorial design experiment and a 2-way ANOVA? | A 2x2 factorial design is a trial design meant to be able to more efficiently test two interventions in one sample. For instance, testing aspirin versus placebo and clonidine versus placebo in a randomized trial (the POISE-2 trial is doing this). Each patient is randomized to (clonidine or placebo) and (aspirin or placebo). The main effect of aspirin and the main effect of clonidine on the outcome of interest can be assessed using a two-way ANOVA.
This trial design is useful to detect an interaction (this is where the effect on the outcome of one factor (e.g. aspirin) depends on the level of the other factor (i.e. whether or not the person gets clonidine)), but one must be careful, as many factorial trials are not powered to detect an interaction. Therefore, one runs the risk of falsely declaring that there is no interaction, when in fact there is one (a type II error).
Therefore, I wouldn't say the two have the same assumptions, as one is a design and one is a statistical method. That being said, the two-way ANOVA is a great way of analyzing a 2x2 factorial design, since you will get results on the main effects as well as any interaction between the effects.
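The main effects and the interaction contrast can be read straight off the four cell means of a 2x2 design. A small simulation sketch (the effect sizes and cell counts are made up):

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated 2x2 factorial: factor A (e.g. aspirin) and factor B (e.g. clonidine),
# n patients per cell.  True effects: A shifts the outcome by -2, B by -1,
# with no interaction.
n = 1000
cells = {(a, b): 10 - 2 * a - 1 * b + rng.normal(0, 1, n)
         for a in (0, 1) for b in (0, 1)}
means = {k: v.mean() for k, v in cells.items()}

# Main effect of A, averaged over the levels of B
main_A = ((means[1, 0] + means[1, 1]) - (means[0, 0] + means[0, 1])) / 2
# Interaction: does the effect of A depend on the level of B?
interaction = (means[1, 1] - means[0, 1]) - (means[1, 0] - means[0, 0])
```

With these data, `main_A` estimates the true -2 shift and `interaction` hovers near zero, which is exactly what the two main-effect and interaction terms of a two-way ANOVA would test.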
See http://udel.edu/~mcdonald/stattwoway.html for more information.
35,239 | Why the infrequent use of machine learning techniques in translational biomedicine? | Machine learning techniques often lack interpretability. Also, they tend to be rather crude from a statistical point of view --- e.g. neural networks make no assumptions about the input data. I have a feeling that lots of people (especially if they have a strong statistical background) look down on them.
35,240 | Why the infrequent use of machine learning techniques in translational biomedicine? | The track record of machine learning in biomedicine has not been very good. Early successes in machine learning came in high signal:noise ratio pattern recognition areas such as visual pattern recognition. The S:N ratio is much lower in biology and the social sciences. Machine learning effectively fits a lot of interactions between predictors, and to do that you must either have a huge sample size or a very high S:N ratio. See Is Medicine Mesmerized by Machine Learning?. In addition, many practitioners of machine learning have misunderstood prediction tasks as classification tasks. See here for more.
35,241 | Visualize sets and their connections | Let $C$ be a table where $C[i,j]$ is the number of users that use both the $i$-th and $j$-th feature. $C[i,i]$ is the number of users using the $i$-th feature. By $N$ we denote the total number of users.
One possibility is to plot the table (some suggestions below):
only its left triangular part (as $C[i,j]=C[j,i]$),
with entries sorted in a way where frequently co-used features are together,
with coloring/brightness more-or-less proportional to $\log C[i,j]$.
Another possibility is to construct a graph out of your data. Nodes are features, and the edges connecting them indicate co-usage.
To obtain it you can compute relative co-usage
$$c[i,j]= \frac{C[i,j]}{\sqrt{C[i,i] C[j,j]}}$$
For not correlated features it is $$c_\textrm{non-corr}[i,j]= \frac{N \frac{C[i,i]}{N} \frac{C[j,j]}{N}}{\sqrt{C[i,i] C[j,j]}} = \frac{\sqrt{C[i,i] C[j,j]}}{N}.$$
If it is much higher (up to $1$) then the features are co-used.
If it is much lower (down to $0$) then the features are anti-correlated (i.e. people tend to use either $i$ or $j$), which may be a common phenomenon as well.
You need to set thresholds $t_c$ (and optionally $t_a<t_c$ ):
If $\frac{c[i,j]}{c_\textrm{non-corr}[i,j]}>t_c$ connect $i$ and $j$, to mark them as the co-used features.
Optionally, if $\frac{c[i,j]}{c_\textrm{non-corr}[i,j]}<t_a$ connect $i$ and $j$ with a different type of lines, to mark them as anti-correlated.
Displaying the actual numbers on the plot (or point/line sizes) may be useful.
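The whole construction takes only a few lines given a binary user-by-feature matrix. A sketch with simulated independent usage data (matrix sizes and the usage probability are made up):

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy binary usage matrix: rows are users, columns are features
# (each feature used independently with probability 0.3).
X = (rng.random((500, 6)) < 0.3).astype(int)
N = X.shape[0]

C = X.T @ X                                   # C[i, j]: users using features i and j
geo = np.sqrt(np.outer(np.diag(C), np.diag(C)))
c = C / geo                                   # relative co-usage c[i, j]
c_noncorr = geo / N                           # expected value under independence

ratio = c / c_noncorr                         # compare against thresholds t_c, t_a
```

Since the simulated features really are independent, the off-diagonal entries of `ratio` cluster near $1$; real co-used features would push their entries well above that.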
EDIT: Fixed an error.
35,242 | Visualize sets and their connections | I think you may be interested in circular displays for tabular data (in your case, a two-way table denoting the co-occurrence of every pair of binary features), as proposed through Circos; see example and on-line demo here.
Sidenote: As an alternative, you can also take a look at Parallel Sets that were developed by Robert Kosara. See also,
Robert Kosara, Turning a Table into a Tree: Growing Parallel Sets into a Purposeful Project, in Steele, Iliinsky (eds), Beautiful Visualization, pp. 193–204, O'Reilly Media, 2010.
35,243 | Visualize sets and their connections | The simple approach for #2 would be a cross-tabulation: the list of 10 features across the top and down the side, with the intersecting usage of each feature shown as a count, or as various percentages. Percentages are incredibly flexible: you can base the percentages on column count or table count, and those counts can be unique users or unique user-feature pairs. For that reason I'd start with the counts. Counts are nice and simple and close to the original data. But, your original question sounds like you'll need to do a percentage at some point.
Using conditional formatting in Excel is a quick and dirty way to then see the relative magnitude - set it to a color scale.
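A quick Python sketch of the cross-tabulation idea (made-up data; `usage` is a binary user × feature matrix): the co-usage table is a single matrix product, and column-based percentages follow by dividing by the diagonal.

```python
import numpy as np

# rows = users, columns = features; 1 means the user uses that feature
usage = np.array([[1, 1, 0],
                  [1, 0, 1],
                  [1, 1, 1],
                  [0, 1, 0]])

# counts[i, j] = number of users using both feature i and feature j;
# the diagonal holds per-feature user counts
counts = usage.T @ usage

# percentages based on the column count (users of feature j)
col_pct = counts / np.diag(counts)[np.newaxis, :]
```

From here, a heat-mapped table of `counts` or `col_pct` plays the same role as the color-scaled Excel cross-tab.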
35,244 | Visualize sets and their connections | A couple of simple ideas:
bar graph of pairs (or sets) of features
a separate bar graph, for each feature, of how many times each other feature appears with it.
35,245 | Why do people often run a regression with and without control variables? | A little bit on terms first. By definition, a control variable is kept constant through the study, so you can't use it in regression. You probably mean variables that should be statistically controlled for, such as covariates or blocking factors (as after a randomized block experimental design).
People run regression or ANOVA with such variables not only to wash their effect off predictor variables but mainly to check whether their own effect is significant. If it is significant then their inclusion in the model is fully warranted. If not, they might better be excluded from the model.
This is mostly important for a blocking factor. If you leave it in the model despite it not being significant, you risk missing the effect of predictor variables due to the decrease in Error-term df: the blocking factor decreases both the Error sum-of-squares and its df, and a competitive situation appears. Significance of predictors may go down or up depending on "what wins" - the fall of the Error sum-of-squares or the fall of its df. This may be the reason why people sometimes prefer more concise models.
Another reason for this may be that for a sample as moderate as 100, inclusion of a lot of IVs, even if they all seem important or significant, leads to overfitting.
35,246 | Why do people often run a regression with and without control variables? | One more reason to include covariates is that they are important in the literature. If you can demonstrate that some covariate that has been found to have large effects in the past (either on its own or by affecting other parameters) does NOT have large effects in your study, then you have discovered something interesting.
35,247 | Why do people often run a regression with and without control variables? | Typically, this means that there is a regression with an outcome and a treatment variable. Then, there are other controls that could be added to the model---other covariates that may be important. The authors first run a simple model that only includes treatment. Then, they check the robustness of their findings to the inclusion of other variables. In particular, they ask whether the inclusion of other covariates reduces or eliminates the impact estimated in the simple model.
Additionally, the inclusion of other covariates typically reduces standard errors. In this case, authors may find that the estimated impact is relatively similar between the simple model and the one that includes controls, but only in the latter is the estimate significant (usually, different from 0). The authors would then use the latter model to perform inference (hypothesis tests, confidence intervals) because of its smaller standard errors.
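The pattern described above can be illustrated numerically. In this hedged sketch (synthetic data, variable names of my choosing), the treatment coefficient is estimated once without and once with a covariate that is correlated with the treatment; omitting the covariate biases the simple estimate.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
covariate = rng.normal(size=n)
treatment = 0.7 * covariate + rng.normal(size=n)   # treatment correlated with covariate
y = 1.0 * treatment + 2.0 * covariate + rng.normal(size=n)

def ols(X, y):
    """Ordinary least squares with an intercept, via numpy's lstsq."""
    X = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta

b_simple = ols(treatment, y)                              # omits the covariate
b_full = ols(np.column_stack([treatment, covariate]), y)  # includes the covariate

# b_simple[1] is biased upward; b_full[1] recovers the true effect of 1.0
```

Comparing `b_simple[1]` with `b_full[1]` is exactly the robustness check the answer describes.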
35,248 | Why do people often run a regression with and without control variables? | In addition to the answers above, there are some covariate selection techniques that involve comparing models with and without a variable in place. And if one wishes to illustrate the effect of adding a covariate, the crude (unadjusted) model is necessary as a reference in the first place.
35,249 | The exact distribution of Wilcoxon rank-sum statistic U | AFAIK, there is no closed form for the distribution. Using R, the naive implementation of getting the exact distribution works for me up to group sizes of at least 12 - that takes less than 1 minute on a Core i5 using Windows7 64bit and current R. For R's own more clever algorithm in C that's used in pwilcox(), you can check the source file src/nmath/wilcox.c
n1 <- 12 # size group 1
n2 <- 12 # size group 2
N <- n1 + n2 # total number of subjects
Now generate all possible cases for the ranks within group 1. These are all ${N \choose n_{1}}$ different samples from the numbers $1, \ldots, N$ of size $n_{1}$. Then calculate the rank sum (= test statistic) for each of these cases. Tabulate these rank sums to get the probability density function from the relative frequencies, the cumulative sum of these relative frequencies is the cumulative distribution function.
rankMat <- combn(1:N, n1) # all possible ranks within group 1
LnPl <- colSums(rankMat) # all possible rank sums for group 1
dWRS <- table(LnPl) / choose(N, n1) # relative frequencies of rank sums: pdf
pWRS <- cumsum(dWRS) # cumulative sums: cdf
Compare the exact distribution against the asymptotically correct normal distribution.
muLnPl <- (n1 * (N+1)) / 2 # expected value
varLnPl <- (n1*n2 * (N+1)) / 12 # variance
plot(names(pWRS), pWRS, main="Wilcoxon RS, N=(12, 12): exact vs. asymptotic",
type="n", xlab="ln+", ylab="P(Ln+ <= ln+)", cex.lab=1.4)
curve(pnorm(x, mean=muLnPl, sd=sqrt(varLnPl)), lwd=4, n=200, add=TRUE)
points(names(pWRS), pWRS, pch=16, col="red", cex=0.7)
abline(h=0.95, col="blue")
legend(x="bottomright", legend=c("exact", "asymptotic"),
       pch=c(16, NA), col=c("red", "black"), lty=c(NA, 1), lwd=c(NA, 2))
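For readers without R, here is my Python translation of the same brute-force enumeration (itertools.combinations in place of combn); like the R version, it is only practical for small group sizes since it enumerates all $\binom{N}{n_1}$ rank subsets.

```python
from itertools import combinations
from collections import Counter
from math import comb

def exact_ranksum_pmf(n1, n2):
    """Exact pmf of the group-1 rank sum, by enumerating every way of
    assigning n1 of the ranks 1..N to group 1."""
    N = n1 + n2
    counts = Counter(sum(ranks) for ranks in combinations(range(1, N + 1), n1))
    total = comb(N, n1)
    return {s: c / total for s, c in sorted(counts.items())}

pmf = exact_ranksum_pmf(6, 6)
# sanity check against the moments used for the normal approximation:
mean = sum(s * p for s, p in pmf.items())   # should equal n1 * (N + 1) / 2 = 39
```

The cdf follows by a cumulative sum over the sorted pmf, just as in the R code above.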
35,250 | The exact distribution of Wilcoxon rank-sum statistic U | Caracal's answer is nice, but it's important to consider that the large sample approximation works best for equal sample sizes, and can perform considerably worse for unbalanced samples.
The paper you (and I) are looking for is for more general statistics than the Wilcoxon (Jonckheere-Terpstra, Umbrella tests, etc...).
Mehta has some papers around 1984 that should speed up the calculation of the distribution, but I agree with Caracal that pwilcox() should do the trick for you unless your samples are quite large.
Also, consider looking at the probability generating function for the Wilcoxon, for which a closed-form solution does exist and shows up as early as Jonckheere's original paper, and many times after that. This may or may not be useful, depending on your application.
35,251 | Can a 2-2-1 feedforward neural network with sigmoid activation functions represent a solution to XOR? | It is not possible to give a perfect logical gate with a logistic output neuron because the range is $(0,1)$, but you can approximate $(0,0)\mapsto 0, (0,1)\mapsto 1, (1,0) \mapsto 1, (1,1) \mapsto 0$ arbitrarily well, contrary to the previous answer.
Let the inputs be $i,j$. Let the first hidden neuron have weights $(1,1)$ so that its output is $\sigma(i+j)$ where $\sigma$ is the logistic function $\sigma(x) = \frac{\exp(x)}{1+\exp(x)}$, which is $(0,0)\mapsto 1/2, (1,0),(0,1) \mapsto \sigma(1) = 0.731, (1,1) \mapsto \sigma(2) = 0.881$. Let the second hidden neuron have weights $(2,2)$ so that it takes the values $1/2, \sigma(2) = 0.881, \sigma(4) = 0.982$. Let the output neuron have weights $(\alpha,\beta)$. In order to produce the XOR function, we want
$$\begin{eqnarray}\frac12 \alpha + \frac12 \beta &\ll& 0 \newline \sigma(1) \alpha + \sigma(2) \beta & \gg & 0 \newline \sigma(2) \alpha + \sigma(4) \beta &\ll & 0.\end{eqnarray}$$
If we find $(\alpha,\beta)$ so that the inequalities are satisfied, this puts the outputs on the correct sides of $1/2$. Then we can rescale so that the output of the network is arbitrarily close to XOR.
The inequalities are satisfied by $(-1,\beta)$ if $0.830 = \frac{\sigma(1)}{\sigma(2)} \lt \beta \lt \frac{\sigma(2)}{\sigma(4)} = 0.897$. For example, $(\alpha,\beta) = (-1,0.85)$ produces outputs of $(0.481, 0.504, 0.488)$. Scaling this up to $(\alpha,\beta) = (-1000,850)$ gives a neural network of the required structure and no biases which takes the following values:
$$\begin{eqnarray}
(0,0) & \mapsto & 2.7 \times 10^{-33} & \approx & 0\newline
(0,1) & \mapsto & 1 - (2.2 \times 10^{-8}) & \approx & 1\newline
(1,0) & \mapsto & 1 - (2.2 \times 10^{-8}) & \approx & 1\newline
(1,1) & \mapsto & 9.7 \times 10^{-21} & \approx & 0.
\end{eqnarray}$$
So, you can produce XOR using that structure and no biases.
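The construction above is easy to verify numerically. This sketch hard-codes the weights derived in the answer (hidden weights $(1,1)$ and $(2,2)$, output weights $(-1000, 850)$, no biases):

```python
import math

def sigma(x):
    # numerically stable logistic function
    return 1 / (1 + math.exp(-x)) if x >= 0 else math.exp(x) / (1 + math.exp(x))

def net(i, j, alpha=-1000.0, beta=850.0):
    h1 = sigma(1 * i + 1 * j)   # first hidden neuron, weights (1, 1)
    h2 = sigma(2 * i + 2 * j)   # second hidden neuron, weights (2, 2)
    return sigma(alpha * h1 + beta * h2)

outputs = {(i, j): net(i, j) for i in (0, 1) for j in (0, 1)}
# outputs round to XOR: 0, 1, 1, 0
```

The four outputs match the values quoted in the answer to within floating-point precision.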
35,252 | Linear regression terminology question -- Beta (β) | You're right. Most texts I've seen write a regression model as $$Y = \beta_0 + \beta_1 X_1 + \ldots + \beta_{p-1} X_{p-1} + \epsilon,$$ and the second usage of "beta" or "beta weight" to mean "standardised regression coefficient" is also relatively common (and is used in some statistical software).
I avoid this ambiguity by saying/writing "standardised regression coefficient" rather than "beta" in the second situation. I would also only say/write $\beta_i$ in the first situation if I've defined it, otherwise I'd say something like "true regression coefficient of $X_i$".
35,253 | Linear regression terminology question -- Beta (β) | Assuming the linear model is correct, $\hat{\beta}$ (this is sometimes also called $b$ in elementary texts), the coefficient estimated from the data set you have, is an estimate of the true slope, $\beta$. To my knowledge there is no standard notation for the standardized coefficient, although some simple algebra will give you the relationship between the standardized coefficient and the un-standardized one.
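The "simple algebra" mentioned above gives, for a single predictor, $\beta^{std} = \hat{\beta}\, s_x / s_y$: standardising $x$ and $y$ rescales the slope by the ratio of their standard deviations. A short numerical check on synthetic data of my own:

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(size=200)
y = 3.0 * x + rng.normal(size=200)

def slope(x, y):
    # OLS slope with intercept: cov(x, y) / var(x)
    return np.cov(x, y)[0, 1] / np.var(x, ddof=1)

b = slope(x, y)                                   # unstandardized coefficient
b_std = slope((x - x.mean()) / x.std(ddof=1),
              (y - y.mean()) / y.std(ddof=1))     # slope on z-scored data
# b_std equals b * sd(x) / sd(y), i.e. the correlation coefficient
```

The identity holds exactly (up to floating-point error), not just approximately.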
35,254 | Generating random samples from a density function | Denote the joint density you want to sample from by $p_{XY}(x,y)$. Then the marginal densities are easily obtainable:
$$ p_{X}(x) = \int_{D_{Y}} p_{XY}(x,y) dy, \ \ \ \ \ \ \ \ \ p_{Y}(y) = \int_{D_{X}} p_{XY}(x,y) dx $$
where $D_X, D_Y$ denote the support of $X, Y$, respectively. From this you can calculate each of the conditional density functions:
$$ p_{X|Y=y}(x) = \frac{ p_{XY}(x,y) }{ p_{Y}(y) }, \ \ \ \ \ \ \ \ p_{Y|X=x}(y) = \frac{ p_{XY}(x,y) }{ p_{X}(x) } $$
From here you can execute Gibbs sampling to sample from $p_{XY}$. That is, start with some arbitrary starting value $(x_{0}, y_{0})$. Then, at step $i$,
(1) Sample from $p_{X|Y=y_{i}}(x)$ to obtain $x_{i+1}$
(2) Sample from $p_{Y|X=x_{i+1}}(y)$ to obtain $y_{i+1}$
(3) Repeat a number of times equal to the desired sample size
Each $(x_{i}, y_{i})$ is an approximate draw from the joint distribution of $X$ and $Y$. The conditional densities can be sampled from in steps (1) and (2) using, for example, rejection sampling assuming they do not come from one of the "standard" distributions, in which case you can use a pre-existing function to generate from them.
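As a toy illustration of steps (1)-(3), here is a Gibbs sampler for a bivariate normal with correlation $\rho$, where both conditionals are known normals ($X \mid Y=y \sim N(\rho y,\, 1-\rho^2)$, and symmetrically for $Y \mid X$). The example distribution is mine, not from the answer; for a non-standard joint density you would replace the two conditional draws with, e.g., rejection sampling, as noted above.

```python
import numpy as np

def gibbs_bivariate_normal(rho, n_samples, burn_in=500, seed=0):
    rng = np.random.default_rng(seed)
    x, y = 0.0, 0.0                    # arbitrary starting value (x0, y0)
    draws = []
    for _ in range(burn_in + n_samples):
        x = rng.normal(rho * y, np.sqrt(1 - rho**2))  # step (1): sample x | y
        y = rng.normal(rho * x, np.sqrt(1 - rho**2))  # step (2): sample y | x
        draws.append((x, y))
    return np.array(draws[burn_in:])   # step (3): keep the post-burn-in draws

samples = gibbs_bivariate_normal(rho=0.8, n_samples=5000)
```

Discarding an initial burn-in, as done here, reduces the dependence on the arbitrary starting value.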
35,255 | Generating random samples from a density function | You can use rejection sampling.
Let $f$ be the probability density we want to sample from (it does not matter if it is 1- or 2-dimensional). If we can find a probability distribution $g$ from which we can easily sample, e.g., a uniform distribution (a good choice if the support of $f$ is bounded), a Gaussian distribution, or a Cauchy distribution (a last-resort choice, inefficient but useful if $f$ has fat tails), such that there exists a constant $c$ with $f \leq c g$, then we can sample from $f$ as follows.
Take a random sample $x$ from $g$, and take a random number $\lambda$ uniformly in $[0,1]$; if $\lambda c g(x) < f(x)$, keep the sample, otherwise reject it and try again. Repeat until you have the desired number of samples.
The reason why it works is easily seen on a picture: $(x,\lambda c g(x))$ is a point sampled uniformly in the area under the curve $x \mapsto c g(x)$, and the condition $\lambda c g(x) < f(x)$ asks that it be under the curve $x\mapsto f(x)$.
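A concrete Python sketch of this recipe, with a target density of my own choosing: $f(x) = 6x(1-x)$ on $[0,1]$ (a Beta(2,2) density), uniform envelope $g$, and $c = 1.5$ (the maximum of $f$, so that $f \leq c g$ holds).

```python
import numpy as np

def rejection_sample(f, g_sample, g_pdf, c, n, seed=0):
    rng = np.random.default_rng(seed)
    out = []
    while len(out) < n:
        x = g_sample(rng)              # candidate from the easy distribution g
        lam = rng.uniform()
        if lam * c * g_pdf(x) < f(x):  # keep only points under the target curve
            out.append(x)
    return np.array(out)

f = lambda x: 6 * x * (1 - x)          # Beta(2,2) density on [0, 1]
samples = rejection_sample(f,
                           g_sample=lambda rng: rng.uniform(),  # g = Uniform(0,1)
                           g_pdf=lambda x: 1.0,
                           c=1.5, n=4000)
```

The acceptance rate is $1/c$ on average, which is why a tight envelope matters for efficiency.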
35,256 | What is covariance matrix adaptation evolution strategy? | Per the wikipedia page linked above, to answer (1): this is another form of gradient descent (if you need more information with lots of pictures, there are many articles available if you google it -- sorry, apparently new posters only get 2 urls, so I'm having to tell you to search instead of pointing you to specific ones), so it is used for optimization of an objective function. Usually this means I want to find the maximum or minimum value for a function I'm interested in.
Not to split hairs with the other poster, but how statistical this algorithm is is a question of semantics. It is proposed to converge (for our considerations, that is to say, this method works) based on a maximum likelihood argument, and covariance (variance in higher dimension -- the spread of the sample in the space) is used as part of the update rule for this algorithm to determine how best to proceed for the next iteration. Also, new points are sampled from a multivariate normal distribution, which is a probability model. Similarly, as the wikipedia points out, this method is similar to, but not identical to, principal component analysis. So from my perspective it seems pretty statistical, but again that is perhaps subjective.
I suppose the second wiki link and parts of the above address (2), but perhaps not adequately; if you need more information please feel free to follow up. The short of it is that if there is an optimum, this procedure searches for it in the problem space and will eventually find it under the model assumptions. You need to test the assumptions, because deviation from them will likely have major consequences for how well this method works on your problem.
I do however agree with the other poster that you may find more, if not better, answers on a site like Meta Optimize, which is a community like this one but specific to machine learning; that said, many statisticians do work on learning problems, so I would expect others will weigh in as well.
As for (3), it is not necessarily a better method. It is a method for a different set of assumptions than other methods (such as a feedforward neural network with gradient descent, for example); specifically, this method is for non-linear or non-convex problems. Again, you need to test whether the problem you are working on is appropriate for this method before proceeding to use it. Otherwise you may get a less than optimal solution, or worse yet, garbage in, garbage out. If your problem is non-linear or non-convex then go for it; otherwise you may need to look into using a different method. The "Performance in Practice" section of the wikipedia article may help you determine whether this is an appropriate method for the problem you are working on.
You alluded to some mathphobia when not wanting to see too many expressions. So you may not know some properties of the data/problem you're working with. Are you willing to share some more specific details or are you just asking in general?
Hopefully this helps. Please let me know if there are any follow ups. Good luck.
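To give a feel for the core loop (sample a population from a multivariate normal, keep the best points, re-fit the mean and covariance), here is a toy 2-D sketch. To be clear, this is not the real CMA-ES — it omits the evolution paths, step-size control, and weighted rank-mu update that make CMA-ES robust — and all names are illustrative:

```python
import math
import random

def sample_mvn2(mean, cov, rnd):
    """One draw from a 2-D normal N(mean, C) via a hand-rolled Cholesky factor.
    cov is the triple (var_x, cov_xy, var_y)."""
    a, b, d = cov
    l11 = math.sqrt(a)
    l21 = b / l11
    l22 = math.sqrt(max(d - l21 * l21, 1e-12))
    z1, z2 = rnd.gauss(0, 1), rnd.gauss(0, 1)
    return (mean[0] + l11 * z1, mean[1] + l21 * z1 + l22 * z2)

def es_minimize(f, mean, iters=60, popsize=30, nkeep=10, seed=0):
    """Toy evolution strategy with covariance adaptation (NOT full CMA-ES)."""
    rnd = random.Random(seed)
    cov = (1.0, 0.0, 1.0)  # start from the identity covariance
    for _ in range(iters):
        pop = sorted((sample_mvn2(mean, cov, rnd) for _ in range(popsize)), key=f)
        elite = pop[:nkeep]                       # truncation selection
        old = mean
        mean = (sum(p[0] for p in elite) / nkeep,
                sum(p[1] for p in elite) / nkeep)
        # Re-estimate the covariance from the elite around the OLD mean,
        # loosely mirroring the rank-mu idea; tiny jitter keeps it positive.
        cov = (sum((p[0] - old[0]) ** 2 for p in elite) / nkeep + 1e-12,
               sum((p[0] - old[0]) * (p[1] - old[1]) for p in elite) / nkeep,
               sum((p[1] - old[1]) ** 2 for p in elite) / nkeep + 1e-12)
    return mean

# Minimize the sphere function starting away from the optimum at the origin.
best = es_minimize(lambda p: p[0] ** 2 + p[1] ** 2, (3.0, -2.0))
```

On a smooth convex test function like this sphere, the sketch homes in on the optimum; the full CMA-ES machinery is what makes the idea work on harder, ill-conditioned, non-convex problems.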
35,257 | What is covariance matrix adaptation evolution strategy? | It's an optimization algorithm: it tries to find the minimum of a function. It is said to be amongst the best optimization algorithms for non-convex problems in high dimensions (above 5 or 10 parameters to optimize).
The term Covariance in the name is a bit misleading to the statistics community: there is no statistics involved in the algorithm. As a result, this site might not be the best place to ask for help on it.
I can't give much more information on it, but the wikipedia page is quite clear.
35,258 | Quality-price trade-off | The Keeney-Raiffa approach to Multi-attribute valuation theory is well-grounded practically and theoretically, has been successfully applied to many problems, and--when applied to problems with just two attributes--is particularly simple. It proceeds by systematically exploring the trade-offs you would actually make in hypothetical situations and uses those to deduce two things: (1) an appropriate way to re-express each attribute and (2) a linear combination of the re-expressed attributes that fully reflects an overall value.
Be careful when doing Web research on this. The vast majority of published models of this type appear to ignore (1), which is crucial, and often establish (2) in ad-hoc or arbitrary ways.
Another approach, consistent with (but inferior to) the Keeney-Raiffa theory, establishes an "efficient frontier." Plot quality on one axis and price on another for each of the available alternatives. If you do so with increasing quality to the right and decreasing price upwards, then points lying at the extreme right or above of all the others are the best candidates to consider. In effect this method ignores (1) and uses this "frontier" to avoid specifying the coefficients in (2). It is often used in financial applications where the two attributes are "alpha" (expected rate of return) and "beta" (variance of returns, a surrogate for risk). Modern portfolio theory uses a variant of this approach.
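The "efficient frontier" idea is easy to compute by brute force: keep every alternative that no other alternative beats on both attributes at once. A sketch with made-up items:

```python
def efficient_frontier(items):
    """items: list of (name, quality, price) triples. Keep the items that are
    not dominated -- no other item has >= quality AND <= price with at least
    one strict improvement."""
    frontier = []
    for name, q, p in items:
        dominated = any(q2 >= q and p2 <= p and (q2 > q or p2 < p)
                        for _, q2, p2 in items)
        if not dominated:
            frontier.append(name)
    return frontier

# B is dominated by D (same quality, cheaper); C is dominated by A.
items = [("A", 5, 8), ("B", 7, 12), ("C", 4, 15), ("D", 7, 9)]
front = efficient_frontier(items)
```

The frontier only narrows the candidate set; choosing among the frontier points still requires the trade-off judgements that the Keeney-Raiffa approach makes explicit.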
35,259 | How to compare Likert scales with varying number of categories across time? | Apply an empirically-based rescaling formula: If you can administer both versions of the scale to a subsample, you could estimate what the corresponding scores are on the two response formats. Then you could apply a conversion formula that is empirically justified.
There are several ways that you could do this. For instance, you could get 100 or so participants (more is better) to answer the set of questions twice (perhaps counterbalanced for order) using one response format and then the other. You could then experiment with different weightings of the old scale that yield identical means and standard deviations (and any other characteristics of interest) on the new scale. This could potentially be set up as an optimisation problem.
Apply a "common-sense" conversion:
Alternatively, you could apply a "common-sense" conversion.
One naive conversion involves rescaling so that the min and max of the two scales are aligned. So for your 5-point to 9-point conversion, it would be 1 = 1; 2 = 3; 3 = 5; 4 = 7; 5 = 9.
A more psychologically plausible conversion might consider 1 on 9-point scale to be more extreme than a 1 on a 5-point scale. You could also consider the words used for the response options and how they align across the response formats. So for instance, you might choose something like 1 = 1.5, 2 = 3, 3 = 5, 4 = 7, 5 = 8.5, with a final decision based on some expert judgements.
Importantly, these "common sense conversions" are only approximate. In some contexts, an approximate conversion is fine. But if you're interested in subtle longitudinal changes (e.g., were employees less satisfied in Year 2 compared to Year 1?), then approximate conversions are generally not adequate. As a broad statement (at least within my experience in organisational settings) changes in item wording and changes in scale options are likely to have a greater effect on responses than any actual change in the attribute of interest.
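For what it's worth, both conversions above are one-liners; the function and dictionary names here are illustrative:

```python
def rescale_minmax(x, old_min, old_max, new_min, new_max):
    """Linearly map a response so the endpoints of the two scales align."""
    return new_min + (x - old_min) * (new_max - new_min) / (old_max - old_min)

# Naive 5-point -> 9-point conversion: 1->1, 2->3, 3->5, 4->7, 5->9
naive = {x: rescale_minmax(x, 1, 5, 1, 9) for x in range(1, 6)}

# "Common-sense" conversion with softened endpoints, fixed by expert judgement
common_sense = {1: 1.5, 2: 3, 3: 5, 4: 7, 5: 8.5}
```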
35,260 | How to compare Likert scales with varying number of categories across time? | [Technically you've got survey items, not Likert scales; the latter are fashioned from multiple items. See, for example, Paul Spector's Summated Rating Scale Construction {Sage}.]
The steps you take will need to depend on the audience for which you're reporting. If it's academic and rigorous, like a dissertation committee, you may face special challenges. If it's not, and if it's comfortable with the common 1-5 format, why not rescale to fit that and then report means and standard deviations (especially since shapes, skew, and kurtosis are no different from year to year)? I presume the distributions are normal enough that means accurately express central tendency.
-->Why am I treating your variables as interval-level ones? Purists may say that ordinal-level variables should not be reported via means or s.d. Well, your comments suggest, despite your use of "categorical/ordinal," that you are dealing with an ordinal level of measurement which you actually feel comfortable treating as interval-level. After all, why otherwise would you assess skewness or kurtosis. I'm guessing that your audience, too, will be ok with and will be able to relate to interval-level statistics such as means.
It sounds good that you have already explored the data graphically. If you want to go beyond assessing the magnitude of the difference and conduct an hypothesis test, why not do a T-test (independent or correlated, depending on your data) comparing the 1-5 scores pre and the 1-5 scores post, and yielding a confidence interval for the mean difference. Here I'm assuming you've got random samples from a population.
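As a sketch of that comparison for the independent-samples case (using a normal approximation in place of the t distribution, which is adequate for reasonably large samples; the function name and data are made up):

```python
import statistics
from statistics import NormalDist

def mean_diff_ci(a, b, level=0.95):
    """Two independent samples: return the mean difference and a
    normal-approximation confidence interval for it."""
    d = statistics.mean(a) - statistics.mean(b)
    se = (statistics.variance(a) / len(a) + statistics.variance(b) / len(b)) ** 0.5
    z = NormalDist().inv_cdf(0.5 + level / 2)   # e.g. 1.96 for 95%
    return d, (d - z * se, d + z * se)

# Toy rescaled 1-5 scores for Year 1 and Year 2 respondents
a = [3, 4, 5] * 10
b = [2, 3, 4] * 10
d, (lo, hi) = mean_diff_ci(a, b)   # interval excluding 0 suggests a real shift
```

With small samples you would want the t distribution (or `scipy.stats.ttest_ind`) instead of the normal critical value.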
35,261 | How to compare Likert scales with varying number of categories across time? | Consider transforming the responses from both data sets into z-scores. There is going to be an ad hoc quality to any sort of rescaling but at least this way you avoid mechanically treating any particular set of intervals on one item as equivalent to any particular set on the other. I'd definitely go this route if I were using the items as predictors or outcome variables in any sort of analysis of variance. If you were doing anything w/ composite scales -- ones that aggregate likert measures -- you'd likely do essentially what I've proposed: either you'd convert the item responses to z-scores before summing or taking their mean to form the composite scale; or you'd form a scale with factor analysis or another technique that uses the covariance matrix of the items to determine the affinity of the responses to them.
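The standardization itself is trivial; a sketch with made-up responses:

```python
import statistics

def to_z_scores(xs):
    """Standardize item responses: subtract the mean, divide by the SD."""
    mu = statistics.mean(xs)
    sd = statistics.stdev(xs)
    return [(x - mu) / sd for x in xs]

# 5-point responses from year 1 and 9-point responses from year 2:
# after standardization both sets are on a common (mean 0, SD 1) footing.
year1_z = to_z_scores([1, 2, 3, 4, 5, 3, 3])
year2_z = to_z_scores([1, 3, 5, 7, 9, 5, 5])
```

Note the caveat in the other answers still applies: z-scoring equates the two years' distributions by construction, so it removes any real shift in overall level along with the scale artifact.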
35,262 | How to compare Likert scales with varying number of categories across time? | I've just had to solve this exact problem. We had a 9 point scale that was changed to a 5 point scale on a tracker going back 10 years. Not only that but some of the statements changed as well. And we were reporting as a form of Net Promoter Score.
The solution we used applies a paired design by asking each respondent a few of the old statements the old way (as well as all the new way). We only asked a couple the old way rather than all of them since this minimises respondent fatigue. We then take each score on the 9 point scale and find its average on the 5 point scale, and use this to correct for the scale change AND the statement change. This is quite similar to what is called the "Semantic Judgement of Fixed Word Value" in some papers, but instead of using experts to decide the 'word value' we used respondents' actual data.
For example, if the average score on the 5 point scale was 1.2 for those respondents who answered 2 on the 9 point scale then to let us directly compare years with different scales on the 5 point scale we would replace all 2's on the 9 point scale with 1.2, then do the same for all the 9 point scores, and proceed as normal.
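That conditional-mean lookup table is straightforward to build from the paired responses; a sketch with toy data (names are illustrative):

```python
from collections import defaultdict

def empirical_conversion(paired):
    """paired: list of (old_score, new_score) from respondents who answered
    both formats. Returns a map old_score -> mean new_score, i.e. the
    replacement value described in the text."""
    sums = defaultdict(float)
    counts = defaultdict(int)
    for old, new in paired:
        sums[old] += new
        counts[old] += 1
    return {old: sums[old] / counts[old] for old in sums}

# Three respondents gave 2 on the 9-point scale, two gave 9.
pairs = [(2, 1), (2, 1), (2, 2), (9, 5), (9, 4)]
table = empirical_conversion(pairs)
```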
We did a similar thing for reporting NPS. But first we converted the 5 point scale to the NPS scale of 1 (promoter), 0 (passive), -1 (detractor) e.g. if the average on the NPS scale was 0.9 for a 2 on the 9 point scale then we replaced it with 0.9, then do the same for all the 9 point scores, and then calculated NPS normally.
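The NPS step on the recoded data then reduces to an average of the +1/0/-1 codes; a sketch:

```python
def nps(codes):
    """Net Promoter Score from +1 (promoter), 0 (passive), -1 (detractor)
    codes: the mean of the codes, equivalently %promoters - %detractors."""
    return sum(codes) / len(codes)

# 3 promoters, 2 passives, 1 detractor out of 6 respondents -> (3 - 1) / 6
codes = [1, 1, 0, -1, 0, 1]
score = nps(codes)
```

With the corrected data, fractional replacement values (like the 0.9 above) simply enter this average in place of the hard +1/0/-1 codes.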
To evaluate the effectiveness of this we first compared the 'uncorrected' NPS scores using the 9 and 5 point scales to see if there was actually any problem at all, and then the 'corrected' ones. I haven't got the data yet but will report back when we do!
35,263 | Group vs Stacked Bar Plots | I think grouped bars are preferable to stacked bars in most situations because they retain information about the sizes of the groups and stay readable even when you have multiple nominal categories. For me, the segments of stacked bars get difficult to compare beyond two categories - and even with just two categories, they can be quite deceptive if your groups are of very different sizes. I'd prefer a frequency table over a stacked bar plot any day.
You should also consider a series of bar plots, with each group in a separate plot:
This is probably what I use most often. You can do this in R with `facet_wrap` and `facet_grid` in `ggplot2`, as well as the `lattice` package.
Historical note: histograms != bar plots
35,264 | Group vs Stacked Bar Plots | Stacked barcharts useless? Here's an appropriate use of one, actually a hybrid, grouped and stacked bar chart. Let's say I set 1 milliliter of milk aside while passing a 2nd milliliter through a column packed with a resin capable of adsorbing a certain milk component that I'm able to later measure. Let's say I capture any liquid that drips through the column in a tube. Then let's say I wash the column with progressively stronger solvents while capturing the flowthrough of each solvent into a separate tube. Finally, let's say I measure the amount of the milk component in each tube, AND then also the amount in the set-aside milliliter. The plot needs to show (a) how much of the component didn't stick to the column, (b1 through bn) how much 'broke through' with each solvent wash, AND (c) -- here's where the stacked display comes in -- whether the total amount recovered equals the starting amount, i.e., whether some of the component appears to still be stuck on the column.
35,265 | Group vs Stacked Bar Plots | I don't think there are any appropriate uses of stacked bar charts; grouped bar charts are better, but both are inferior to other plots, depending on what aspect of your data you want to emphasize, and how much data you have.
35,266 | Doing regressions on samples from a very large file: are the means and SEs of the sample coefficients consistent estimators? | If you can assume that the rows of your data matrix are exchangeable, then your modelling strategy should work well. Your method should be fine under the conditions stated by Gaetan Lion before.
The reason why your method will work (given the exchangeability assumption holds) is that it can be taken as a special case of the parametric bootstrap, in which you re-sample N rows of the big sample, fit a model, store the coefficients, and repeat this M times (in traditional bootstrap terminology your M is equivalent to B), then take the average of the M coefficient estimates. You can also look at it from a permutation-testing viewpoint as well.
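A sketch of that scheme on synthetic data, using simple one-predictor OLS in closed form; all names and the toy "big file" are illustrative:

```python
import random

def fit_slope_intercept(xs, ys):
    """Ordinary least squares for y = a + b*x (closed form)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return my - b * mx, b

def averaged_subsample_fit(rows, N, M, seed=0):
    """Draw M subsamples of N rows, fit on each, average the coefficients."""
    rnd = random.Random(seed)
    fits = [fit_slope_intercept(*zip(*rnd.sample(rows, N))) for _ in range(M)]
    return (sum(a for a, _ in fits) / M, sum(b for _, b in fits) / M)

# Synthetic "big file": y = 2 + 3x + noise
rnd = random.Random(1)
rows = [(x, 2 + 3 * x + rnd.gauss(0, 0.5))
        for x in (rnd.uniform(0, 10) for _ in range(5000))]
a, b = averaged_subsample_fit(rows, N=200, M=25)   # a near 2, b near 3
```

The spread of the M stored coefficient vectors also gives you the standard-error estimate discussed in the question.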
But all these results are true only if the (hard to verify) exchangeability assumption holds. If the exchangeability assumption doesn't hold, the answer becomes a bit more complicated. Probably you need to identify the subgroups in your data which are exchangeable and perform your process conditioned on these subgroups. Basically, hierarchical modeling.
The | Doing regressions on samples from a very large file: are the means and SEs of the sample coefficients consistent estimators?
If you can assume that your rows of your data matrix are exchangeable then your modelling strategy should work well. Your method should be fine under the conditions stated by Gaetan Lion before.
The reason why your method will work (given the exchangeability assumption holds) is that it can be taken as a special case of the parametric bootstrap, in which you re-sample N rows of the big sample, fit a model, store the coefficients, and repeat this M times (in traditional bootstrap terminology your M is equivalent to B), then take the average of the M coefficient estimates. You can also look at it from a permutation testing view point as well.
But all these results are true if the (hard to verify) exchangeability assumption holds. If exchangeability assumption doesn't hold, the answer in that case becomes a bit complicated. Probably you need to take care of the subgroups in your data which are exchangeable and perform your process conditioned on these subgroups. Basically, hierarchical modeling. | Doing regressions on samples from a very large file: are the means and SEs of the sample coefficient
If you can assume that your rows of your data matrix are exchangeable then your modelling strategy should work well. Your method should be fine under the conditions stated by Gaetan Lion before.
The |
35,267 | Doing regressions on samples from a very large file: are the means and SEs of the sample coefficients consistent estimators? | The answer to your original question is yes, because the classical theory applies under your sampling scheme. You don’t need any assumptions on the original data matrix. All of the randomness (implicitly behind standard errors and consistency) comes from your scheme for sampling $N$ rows from the data matrix.
Think of your entire dataset (100M rows) as being the population. Each estimate (assuming your sample of size $N$ is a simple random sample of the rows) is a consistent estimate of the regression coefficients (say, $\hat{\beta}_*$) computed from the entire data set. Moreover, it is approximately Normal with mean equal to $\hat{\beta}_*$ and some covariance. The usual estimate of the covariance of the estimate is also consistent. If you repeat this $M$ times and average those $M$ estimates, then the resulting estimate (say, $\hat{\beta}_{avg}$) will also be approximately Normal. You can treat those $M$ estimates as being nearly independent (uncorrelated) as long as $N$ and $M$ are small relative to 100M. That’s an important assumption. The idea being that sampling without replacement is approximately the same as sampling with replacement when the sample size is small compared to the population size.
That being said, I think that your problem really is one of how to efficiently approximate the regression estimate ($\hat{\beta}_*$) computed from the entire data set. There is a difference between (1) averaging $M$ estimates based on samples of size $N$ and (2) one estimate based on a sample of size $MN$. The MSE of (2) will generally be smaller than the MSE of (1). They would only be equal if the estimate was linear in the data, but that is not the case. I assume you are using least squares. The least squares estimate is linear in the $Y$ (response) vector, but not the $X$ (covariates) matrix. You are randomly sampling $Y$ and $X$.
(1) and (2) are both simple schemes, but not necessarily efficient. (Though it may not matter since you only have 30 variables.) There are better ways. Here is one example: http://arxiv.org/abs/0710.1435 | Doing regressions on samples from a very large file: are the means and SEs of the sample coefficient | The answer to your original question is yes, because the classical theory applies under your sampling scheme. You don’t need any assumptions on the original data matrix. All of the randomness (impli | Doing regressions on samples from a very large file: are the means and SEs of the sample coefficients consistent estimators?
The answer to your original question is yes, because the classical theory applies under your sampling scheme. You don’t need any assumptions on the original data matrix. All of the randomness (implicitly behind standard errors and consistency) comes from your scheme for sampling $N$ rows from the data matrix.
Think of your entire dataset (100M rows) as being the population. Each estimate (assuming your sample of size $N$ is a simple random sample of the rows) is a consistent estimate of the regression coefficients (say, $\hat{\beta}_*$) computed from the entire data set. Moreover, it is approximately Normal with mean equal to $\hat{\beta}_*$ and some covariance. The usual estimate of the covariance of the estimate is also consistent. If you repeat this $M$ times and average those $M$ estimates, then the resulting estimate (say, $\hat{\beta}_{avg}$) will also be approximately Normal. You can treat those $M$ estimates as being nearly independent (uncorrelated) as long as $N$ and $M$ are small relative to 100M. That’s an important assumption. The idea being that sampling without replacement is approximately the same as sampling with replacement when the sample size is small compared to the population size.
That being said, I think that your problem really is one of how to efficiently approximate the regression estimate ($\hat{\beta}_*$) computed from the entire data set. There is a difference between (1) averaging $M$ estimates based on samples of size $N$ and (2) one estimate based on a sample of size $MN$. The MSE of (2) will generally be smaller than the MSE of (1). They would only be equal if the estimate was linear in the data, but that is not the case. I assume you are using least squares. The least squares estimate is linear in the $Y$ (response) vector, but not the $X$ (covariates) matrix. You are randomly sampling $Y$ and $X$.
(1) and (2) are both simple schemes, but not necessarily efficient. (Though it may not matter since you only have 30 variables.) There are better ways. Here is one example: http://arxiv.org/abs/0710.1435 | Doing regressions on samples from a very large file: are the means and SEs of the sample coefficient
The answer to your original question is yes, because the classical theory applies under your sampling scheme. You don’t need any assumptions on the original data matrix. All of the randomness (impli |
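The difference between scheme (1) and scheme (2) is easy to see by simulation. A sketch in Python, with a univariate regression standing in for the 30-variable model; the population size, M, and N here are made up for illustration:

```python
import random
import statistics

random.seed(1)

# Stand-in "huge file": 100k rows of (x, y) with true intercept 2 and slope 0.5
pop = []
for _ in range(100_000):
    x = random.gauss(0, 1)
    pop.append((x, 2.0 + 0.5 * x + random.gauss(0, 1)))

def ols_slope(rows):
    """Least-squares slope for a simple one-covariate regression."""
    xbar = statistics.fmean(r[0] for r in rows)
    ybar = statistics.fmean(r[1] for r in rows)
    sxy = sum((x - xbar) * (y - ybar) for x, y in rows)
    sxx = sum((x - xbar) ** 2 for x, _ in rows)
    return sxy / sxx

M, N = 50, 1000
# Scheme (1): average M estimates, each from a random sample of N rows
avg_of_fits = statistics.fmean(ols_slope(random.sample(pop, N)) for _ in range(M))
# Scheme (2): one estimate from a single sample of M*N rows
one_big_fit = ols_slope(random.sample(pop, M * N))
print(avg_of_fits, one_big_fit)  # both close to the true slope 0.5
```

Both estimators target the full-data coefficient; repeating this over many seeds would show scheme (2) with the slightly smaller MSE, as argued above.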
35,268 | Doing regressions on samples from a very large file: are the means and SEs of the sample coefficients consistent estimators? | The greater the sample N, the smaller the standard error (higher t stat, and smaller the respective p values) associated with all your regression coefficients. The greater M, the more datapoints you will have and the smaller will be your standard error of the mean of the coefficients over M runs. Such means should be approximately normally distributed per the Central Limit Theorem. In terms of convergence of such means, I am not sure there are any statistical principles that dictate this. I suspect if your random sampling is well done (no structural bias, etc...) the convergence should occur fairly rapidly. That is something you just may have to observe empirically.
Otherwise, your method seems good, I don't see any problem with it. | Doing regressions on samples from a very large file: are the means and SEs of the sample coefficient | The greater the sample N, the smaller the standard error (higher t stat, and smaller the respective p values) associated with all your regression coefficients. The greater M, the more datapoints you | Doing regressions on samples from a very large file: are the means and SEs of the sample coefficients consistent estimators?
The greater the sample N, the smaller the standard error (higher t stat, and smaller the respective p values) associated with all your regression coefficients. The greater M, the more datapoints you will have and the smaller will be your standard error of the mean of the coefficients over M runs. Such means should be approximately normally distributed per the Central Limit Theorem. In terms of convergence of such means, I am not sure there are any statistical principles that dictate this. I suspect if your random sampling is well done (no structural bias, etc...) the convergence should occur fairly rapidly. That is something you just may have to observe empirically.
Otherwise, your method seems good, I don't see any problem with it. | Doing regressions on samples from a very large file: are the means and SEs of the sample coefficient
The greater the sample N, the smaller the standard error (higher t stat, and smaller the respective p values) associated with all your regression coefficients. The greater M, the more datapoints you |
35,269 | Obtaining SAS experience | I would recommend going through a self-study course such as the UCLA website and specifically the SAS Starter Kit. If you learn better within an interactive environment, I would suggest checking out online course offerings such as the World Campus SAS courses offered at Penn State University (Stat 480, 481, & 482).
Update: Sorry, I should've read more carefully; I agree with @Christoper.Aden that there aren't really any languages equivalent to SAS. You can learn R to perform statistical calculations, but if you need to use SAS, then learning R will only be a small step in the right direction (general programming knowledge - the two languages are incredibly different in practice).
I would recommend getting an academic discount version of SAS if you enroll in a program like I mentioned above - Penn State currently sells a 1yr licensed copy of SAS for $30 (only to students). | Obtaining SAS experience | I would recommend going through a self-study course such as the UCLA website and specifically the SAS Starter Kit. If you learn better within an interactive environment, I would suggest checking out | Obtaining SAS experience
I would recommend going through a self-study course such as the UCLA website and specifically the SAS Starter Kit. If you learn better within an interactive environment, I would suggest checking out online course offerings such as the World Campus SAS courses offered at Penn State University (Stat 480, 481, & 482).
Update: Sorry, I should've read more carefully; I agree with @Christoper.Aden that there aren't really any languages equivalent to SAS. You can learn R to perform statistical calculations, but if you need to use SAS, then learning R will only be a small step in the right direction (general programming knowledge - the two languages are incredibly different in practice).
I would recommend getting an academic discount version of SAS if you enroll in a program like I mentioned above - Penn State currently sells a 1yr licensed copy of SAS for $30 (only to students). | Obtaining SAS experience
I would recommend going through a self-study course such as the UCLA website and specifically the SAS Starter Kit. If you learn better within an interactive environment, I would suggest checking out |
35,270 | Obtaining SAS experience | As far as SAS goes, getting certified is resume gold. The SAS Institute offers classes and exams to receive the certification. There are also books you can use if you are self-motivated.
Getting SAS is quite difficult if your company does not have it. I'm on a college campus, and they offer academic discounts on student licenses and the campus labs have it installed on some machines. If you want something a little similar, but cheaper, give JMP a try. It's probably the closest thing to the SAS feel.
For similar languages, it would probably depend on your field. The social sciences would probably be more receptive to seeing SPSS on your resume than would the economics-related work. | Obtaining SAS experience | As far as SAS goes, getting certified is resume gold. The SAS Institute offers classes and exams to receive the certification. There are also books you can use if you are self-motivated.
Getting SAS i | Obtaining SAS experience
As far as SAS goes, getting certified is resume gold. The SAS Institute offers classes and exams to receive the certification. There are also books you can use if you are self-motivated.
Getting SAS is quite difficult if your company does not have it. I'm on a college campus, and they offer academic discounts on student licenses and the campus labs have it installed on some machines. If you want something a little similar, but cheaper, give JMP a try. It's probably the closest thing to the SAS feel.
For similar languages, it would probably depend on your field. The social sciences would probably be more receptive to seeing SPSS on your resume than would the economics-related work. | Obtaining SAS experience
As far as SAS goes, getting certified is resume gold. The SAS Institute offers classes and exams to receive the certification. There are also books you can use if you are self-motivated.
Getting SAS i |
35,271 | Obtaining SAS experience | The programming language most similar to SAS is... SAS. Which you can interpret using WPS, which will run SAS code and evidently costs substantially less than a SAS license and has a 30 day free trial. I haven't used it myself, but it should get you started programming in the SAS language.
As M. Tibbits suggests, I don't think that experience with R would be helpful in most corporate settings. I also don't think that SPSS experience will be all that helpful, either, and my sense is that it has a less than stellar reputation outside of the social sciences. | Obtaining SAS experience | The programming language most similar to SAS is... SAS. Which you can interpret using WPS, which will run SAS code and evidently costs substantially less than a SAS license and has a 30 day free tria | Obtaining SAS experience
The programming language most similar to SAS is... SAS. Which you can interpret using WPS, which will run SAS code and evidently costs substantially less than a SAS license and has a 30 day free trial. I haven't used it myself, but it should get you started programming in the SAS language.
As M. Tibbits suggests, I don't think that experience with R would be helpful in most corporate settings. I also don't think that SPSS experience will be all that helpful, either, and my sense is that it has a less than stellar reputation outside of the social sciences. | Obtaining SAS experience
The programming language most similar to SAS is... SAS. Which you can interpret using WPS, which will run SAS code and evidently costs substantially less than a SAS license and has a 30 day free tria |
35,272 | Obtaining SAS experience | When I joined the analytics industry (just out of my own interest) after serving in software for 5 yrs..I didn't know SAS either..I got some version from somewhere and started writing code on my own. Yes, I had a programming background before that..I knew SQL, I knew general programming. I would suggest you visit tutorials and start writing code yourself. The version is something you should get first, though. Read SQL. Know everything from select * to joins to merge..tomorrow if some interviewer gives you a loop or a join (left, right, full)..or some function like 1) contains 2) coalesce 3) sum, min, max, average
4) merge (in=a) (in=b) ..bla bla..you should be in good enough shape that you are gonna ace it. These are just some bits from my side..apart from this you could also focus on reading things like regression analysis, MLE and OLS methods..this would show the interviewer that though this guy didn't have SAS facilities he is good on general concepts..All I am preaching here is what I practiced. | Obtaining SAS experience | When I joined the analytics industry (just out of my own interest) after serving in software for 5 yrs..I didn't know SAS either..I got some version from somewhere and started writing code on my own. Yes, I | Obtaining SAS experience
When I joined the analytics industry (just out of my own interest) after serving in software for 5 yrs..I didn't know SAS either..I got some version from somewhere and started writing code on my own. Yes, I had a programming background before that..I knew SQL, I knew general programming. I would suggest you visit tutorials and start writing code yourself. The version is something you should get first, though. Read SQL. Know everything from select * to joins to merge..tomorrow if some interviewer gives you a loop or a join (left, right, full)..or some function like 1) contains 2) coalesce 3) sum, min, max, average
4) merge (in=a) (in=b) ..bla bla..you should be in good enough shape that you are gonna ace it. These are just some bits from my side..apart from this you could also focus on reading things like regression analysis, MLE and OLS methods..this would show the interviewer that though this guy didn't have SAS facilities he is good on general concepts..All I am preaching here is what I practiced. | Obtaining SAS experience
When I joined the analytics industry (just out of my own interest) after serving in software for 5 yrs..I didn't know SAS either..I got some version from somewhere and started writing code on my own. Yes, I |
35,273 | Obtaining SAS experience | SAS University Edition is a great place to start! | Obtaining SAS experience | SAS University Edition is a great place to start! | Obtaining SAS experience
SAS University Edition is a great place to start! | Obtaining SAS experience
SAS University Edition is a great place to start! |
35,274 | Clustering genes in a time course experiment | It seems you just want to make a fairly standard analysis, so I am not the best person to answer your question; yet I would suggest you dive deeper into Bioconductor; it has a lot of useful stuff, nevertheless finding what you want is painful. For instance the Mfuzz package looks promising. | Clustering genes in a time course experiment | It seems you just want to make a fairly standard analysis, so I am not the best person to answer your question; yet I would suggest you dive deeper into Bioconductor; it has a lot of useful stuff, neve | Clustering genes in a time course experiment
It seems you just want to make a fairly standard analysis, so I am not the best person to answer your question; yet I would suggest you dive deeper into Bioconductor; it has a lot of useful stuff, nevertheless finding what you want is painful. For instance the Mfuzz package looks promising. | Clustering genes in a time course experiment
It seems you just want to make a fairly standard analysis, so I am not the best person to answer your question; yet I would suggest you dive deeper into Bioconductor; it has a lot of useful stuff, neve |
35,275 | Clustering genes in a time course experiment | In complement to @mbq's response (Mfuzz looks fine), I'll just put some references (PDFs) about clustering of time-course gene expression data:
Futschik, ME and Carlisle, B (2005). Noise robust clustering of gene expression time-course data. Journal of Bioinformatics and Computational Biology, 3(4), 965-988.
Luan, Y and Li, H (2003). Clustering of time-course gene expression data using a mixed-effects model with B-splines. Bioinformatics, 19(4), 474-482.
Tai YC and Speed, TP (2006). A multivariate empirical Bayes statistic for replicated microarray time course data. The Annals of Statistics, 34, 2387–2412.
Schliep, A, Steinhoff, C, and Schönhuth, A (2004). Robust inference of groups in gene expression time-courses using mixtures of HMMs. Bioinformatics, 20(1), i283-i289.
Costa, IG, de Carvalho, F, and de Souto, MCP (2004). Comparative analysis of clustering methods for gene expression time course data. Genetics and Molecular Biology, 27(4), 623-631.
Inoue, LYT, Neira, M, Nelson, C, Gleave, M, and Etzioni, R (2006). Cluster-based network model for time-course gene expression data. Biostatistics, 8(3), 507-525.
Phang, TL, Neville, MC, Rudolph, M, and Hunter, L (2003). Trajectory Clustering: A Non-Parametric Method for Grouping Gene Expression Time Courses with Applications to Mammary Development. Pacific Symposium on Biocomputing, 8, 351-362.
Did you try the timecourse package (as suggested by @csgillespie in his handout)? | Clustering genes in a time course experiment | In complement to @mbq's response (Mfuzz looks fine), I'll just put some references (PDFs) about clustering of time-course gene expression data:
Futschik, ME and Charlisle, B (2005). Noise robust clus | Clustering genes in a time course experiment
In complement to @mbq's response (Mfuzz looks fine), I'll just put some references (PDFs) about clustering of time-course gene expression data:
Futschik, ME and Carlisle, B (2005). Noise robust clustering of gene expression time-course data. Journal of Bioinformatics and Computational Biology, 3(4), 965-988.
Luan, Y and Li, H (2003). Clustering of time-course gene expression data using a mixed-effects model with B-splines. Bioinformatics, 19(4), 474-482.
Tai YC and Speed, TP (2006). A multivariate empirical Bayes statistic for replicated microarray time course data. The Annals of Statistics, 34, 2387–2412.
Schliep, A, Steinhoff, C, and Schönhuth, A (2004). Robust inference of groups in gene expression time-courses using mixtures of HMMs. Bioinformatics, 20(1), i283-i289.
Costa, IG, de Carvalho, F, and de Souto, MCP (2004). Comparative analysis of clustering methods for gene expression time course data. Genetics and Molecular Biology, 27(4), 623-631.
Inoue, LYT, Neira, M, Nelson, C, Gleave, M, and Etzioni, R (2006). Cluster-based network model for time-course gene expression data. Biostatistics, 8(3), 507-525.
Phang, TL, Neville, MC, Rudolph, M, and Hunter, L (2003). Trajectory Clustering: A Non-Parametric Method for Grouping Gene Expression Time Courses with Applications to Mammary Development. Pacific Symposium on Biocomputing, 8, 351-362.
Did you try the timecourse package (as suggested by @csgillespie in his handout)? | Clustering genes in a time course experiment
In complement to @mbq's response (Mfuzz looks fine), I'll just put some references (PDFs) about clustering of time-course gene expression data:
Futschik, ME and Carlisle, B (2005). Noise robust clus |
35,276 | Clustering genes in a time course experiment | Just to add to the other answers (which look like they should solve your problem), did you try using standard clustering algorithms for your data when constructing your dendrogram? For example,
heatmap.2(dataset, <standard args>,
hclustfun = function(c){hclust(c, method= 'average')}
)
Instead of using the average distance for clustering, you can also use "ward", "single", "median", ... See ?hclust for a full list.
To extract clusters, use the hclust command directly and then use the cutree command. For example,
hc = hclust(dist(dataset)) #hclust expects a dissimilarity object, hence dist()
cutree(hc, k = 4) #cutree needs k (e.g. four clusters) or h (a cut height)
More details can be found at my webpage. | Clustering genes in a time course experiment | Just to add to the other answers (which look like they should solve your problem), did you try using standard clustering algorithms for your data when constructing your dendrogram? For example,
heatma | Clustering genes in a time course experiment
Just to add to the other answers (which look like they should solve your problem), did you try using standard clustering algorithms for your data when constructing your dendrogram? For example,
heatmap.2(dataset, <standard args>,
hclustfun = function(c){hclust(c, method= 'average')}
)
Instead of using the average distance for clustering, you can also use "ward", "single", "median", ... See ?hclust for a full list.
To extract clusters, use the hclust command directly and then use the cutree command. For example,
hc = hclust(dist(dataset)) #hclust expects a dissimilarity object, hence dist()
cutree(hc, k = 4) #cutree needs k (e.g. four clusters) or h (a cut height)
More details can be found at my webpage. | Clustering genes in a time course experiment
Just to add to the other answers (which look like they should solve your problem), did you try using standard clustering algorithms for your data when constructing your dendrogram? For example,
heatma |
35,277 | How could I predict the results of a simple card game? | The easiest way is just to simulate the game lots of times. The R code below simulates a single game.
nplayers = 4
#Create an empty data frame to keep track
#of card number, suit and if it's magic
empty.hand = data.frame(number = numeric(52),
suit = numeric(52),
magic = numeric(52))
#A list of players who are in the game
players =list()
for(i in 1:nplayers)
players[[i]] = empty.hand
#Simulate shuffling the deck
deck = empty.hand
deck$number = rep(1:13, 4)
deck$suit = as.character(rep(c("H", "C", "S", "D"), each=13))
deck$magic = rep(c(0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0), each=4)
deck = deck[sample(1:52, 52),]
#Deal out five cards per person
for(i in 1:length(players)){
r = (5*i-4):(5*i)
players[[i]][r,] = deck[r,]
}
#Play the game
i = 5*length(players)+1
current = deck[i,]
while(i < 53){
for(j in 1:length(players)){
playersdeck = players[[j]]
#Need to test for magic and suit also - left as an exercise!
if(is.element(current$number, playersdeck$number)){
#Update current card
current = playersdeck[match(current$number,
playersdeck$number),]
#Remove card from players deck
playersdeck[match(current$number, playersdeck$number),] = c(0,
0, 0)
} else {
#Add card to players deck
playersdeck[i,] = deck[i,]
i = i + 1
}
players[[j]] = playersdeck
#Has someone won or have we run out of cards?
if(sum(playersdeck$number) == 0 | i > 52){
i = 53
break
}
}
}
#How many cards are left for each player
for(i in 1:length(players))
{
cat(sum(players[[i]]$number !=0), "\n")
}
Some comments
You will need to add a couple of lines for magic cards and suits, but the data structure is already there. I presume you didn't want a complete solution? ;)
To estimate the average game length, just place the above code in a function and call lots of times.
Rather than dynamically increasing a vector when a player gets a card, I find it easier just to create a sparse data frame that is more than sufficient. In this case, each player has a data frame with 52 rows, which they will never fill (unless it's a 1 player game).
There is a small element of strategy with this game. What should you do if you can play more than one card? For example, if the 7H comes up and you have the 7S, 8H and JC in your hand, all three of these cards are "playable".
nplayers = 4
#Create an empty data frame to keep track
#of card number, suit and if it's magic
emp | How could I predict the results of a simple card game?
The easiest way is just to simulate the game lots of times. The R code below simulates a single game.
nplayers = 4
#Create an empty data frame to keep track
#of card number, suit and if it's magic
empty.hand = data.frame(number = numeric(52),
suit = numeric(52),
magic = numeric(52))
#A list of players who are in the game
players =list()
for(i in 1:nplayers)
players[[i]] = empty.hand
#Simulate shuffling the deck
deck = empty.hand
deck$number = rep(1:13, 4)
deck$suit = as.character(rep(c("H", "C", "S", "D"), each=13))
deck$magic = rep(c(0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0), each=4)
deck = deck[sample(1:52, 52),]
#Deal out five cards per person
for(i in 1:length(players)){
r = (5*i-4):(5*i)
players[[i]][r,] = deck[r,]
}
#Play the game
i = 5*length(players)+1
current = deck[i,]
while(i < 53){
for(j in 1:length(players)){
playersdeck = players[[j]]
#Need to test for magic and suit also - left as an exercise!
if(is.element(current$number, playersdeck$number)){
#Update current card
current = playersdeck[match(current$number,
playersdeck$number),]
#Remove card from players deck
playersdeck[match(current$number, playersdeck$number),] = c(0,
0, 0)
} else {
#Add card to players deck
playersdeck[i,] = deck[i,]
i = i + 1
}
players[[j]] = playersdeck
#Has someone won or have we run out of cards?
if(sum(playersdeck$number) == 0 | i > 52){
i = 53
break
}
}
}
#How many cards are left for each player
for(i in 1:length(players))
{
cat(sum(players[[i]]$number !=0), "\n")
}
Some comments
You will need to add a couple of lines for magic cards and suits, but the data structure is already there. I presume you didn't want a complete solution? ;)
To estimate the average game length, just place the above code in a function and call lots of times.
Rather than dynamically increasing a vector when a player gets a card, I find it easier just to create a sparse data frame that is more than sufficient. In this case, each player has a data frame with 52 rows, which they will never fill (unless it's a 1 player game).
There is a small element of strategy with this game. What should you do if you can play more than one card? For example, if the 7H comes up and you have the 7S, 8H and JC in your hand, all three of these cards are "playable".
The easiest way is just to simulate the game lots of times. The R code below simulates a single game.
nplayers = 4
#Create an empty data frame to keep track
#of card number, suit and if it's magic
emp |
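To make the "place the above code in a function and call lots of times" step concrete, here is a minimal Monte Carlo wrapper in Python; `game_length()` is a hypothetical placeholder you would replace with a function wrapping the actual game simulation (or a port of it):

```python
import random
import statistics

random.seed(42)

def game_length():
    """Hypothetical stand-in: returns the number of cards played in one game.
    Replace this with a wrapper around the real game simulation."""
    return random.randint(20, 52)

M = 10_000  # number of simulated games
lengths = [game_length() for _ in range(M)]
mean = statistics.fmean(lengths)
se = statistics.stdev(lengths) / M ** 0.5  # standard error of the mean
print(f"average game length: {mean:.1f} +/- {se:.2f}")
```

The standard error tells you how many replications you need: it shrinks like the square root of M, so converging estimates can be observed empirically, as suggested in the other answer.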
35,278 | Least squares problem penalising the number of non-zero coefficients | If ridge regression is considered to penalize the usual (square) loss according to the $L_2$ norm of the parameter vector and LASSO regression is considered to penalize the loss according to the $L_1$ norm of the parameter vector, then this would be a penalty according to the $L_0$ "norm" of the parameter vector, which is a count of the nonzero elements. I put "norm" in quotes because $L_0$ is not really a norm, but it is norm-ish.
$$
L\bigg(\hat\beta\bigg\vert X, y,\lambda\bigg) = \overset{N}{\underset{i=1}{\sum}}
\left(
y_i - \overset{p}{\underset{j=0}{\sum}}\left(
\hat\beta_j^TX_{ij}
\right)
\right)^2 +
\lambda\left\vert\left\vert \hat\beta\right\vert\right\vert_0
$$
Ridge and LASSO regression would use $\lambda\left\vert\left\vert \hat\beta\right\vert\right\vert_2$ and $\lambda\left\vert\left\vert \hat\beta\right\vert\right\vert_1$, respectively.
Some of the trouble you might encounter with this is that the above loss function will not be a continuous function of $\hat\beta$ unless $\lambda=0$ (which is just ordinary least squares), since the $L_0$ "norm" abruptly changes as a vector component changes its status of zero or nonzero. Another issue is that computers have trouble declaring a value as being truly zero. For instance, run the following code in R: (sqrt(2))^2 - 2 == 0. We all know that $\left(\sqrt 2\right)^2-2=0$, yet my computer says the statement is false. Finally, as is pointed out in the comments, $L_0$ regularization is $NP$-hard.
For a reference in the literature:
Louizos, Christos, Max Welling, and Diederik P. Kingma. "Learning sparse neural networks through $ L_0 $ regularization." arXiv preprint arXiv:1712.01312 (2017).
I found that reference by running a Google search for "l0 regularization" and would expect that and similar searches to turn up more hits. | Least squares problem penalising the number of non-zero coefficients
If ridge regression is considered to penalize the usual (square) loss according to the $L_2$ norm of the parameter vector and LASSO regression is considered to penalize the loss according to the $L_1$ norm of the parameter vector, then this would be a penalty according to the $L_0$ "norm" of the parameter vector, which is a count of the nonzero elements. I put "norm" in quotes because $L_0$ is not really a norm, but it is norm-ish.
$$
L\bigg(\hat\beta\bigg\vert X, y,\lambda\bigg) = \overset{N}{\underset{i=1}{\sum}}
\left(
y_i - \overset{p}{\underset{j=0}{\sum}}\left(
\hat\beta_j^TX_{ij}
\right)
\right)^2 +
\lambda\left\vert\left\vert \hat\beta\right\vert\right\vert_0
$$
Ridge and LASSO regression would use $\lambda\left\vert\left\vert \hat\beta\right\vert\right\vert_2$ and $\lambda\left\vert\left\vert \hat\beta\right\vert\right\vert_1$, respectively.
Some of the trouble you might encounter with this is that the above loss function will not be a continuous function of $\hat\beta$ unless $\lambda=0$ (which is just ordinary least squares), since the $L_0$ "norm" changes abruptly as a vector component switches between zero and nonzero. Another issue is that computers have trouble declaring a value as being truly zero. For instance, run the following code in R: (sqrt(2))^2 - 2 == 0. We all know that $\left(\sqrt 2\right)^2-2=0$, yet my computer says the statement is false. Finally, as is pointed out in the comments, $L_0$ regularization is $NP$-hard.
For a reference in the literature:
Louizos, Christos, Max Welling, and Diederik P. Kingma. "Learning sparse neural networks through $ L_0 $ regularization." arXiv preprint arXiv:1712.01312 (2017).
I found that by running a Google search for "l0 regularization" and would expect that and similar searches to turn up more hits. | Least squares problem penalising the number of non-zero coefficients
If ridge regression is considered to penalize the usual (square) loss according to the $L_2$ norm of the parameter vector and LASSO regression is considered to penalize the loss according to the $L_1$ |
35,279 | Generating random variable which has a power distribution of Box and Tiao (1962) | Box & Tiao refer to this as a "convenient class of power distributions," referencing Diananda (1949), Box (1953), and Turner (1960).
Because $\mu$ and $\sigma$ just establish a unit of measurement and the absolute value reflects values around the origin, the basic density is proportional to $\exp(-z^p/2)$ where $p = 2/(1+\alpha)$ and $z \ge 0.$ Changing variables to $y = z^p$ for $0\lt p \lt \infty$ changes the probability element to
$$\exp(-z^p/2)\mathrm{d}z \to \exp(-y/2) \mathrm{d}\left(y^{1/p}\right) = \frac{1}{p}y^{1/p - 1}e^{-y/2}\mathrm{d}y.$$
Since $ p = 2/(1+\alpha),$ this is proportional to a scaled Gamma$(1/p)$ = Gamma$((1+\alpha)/2)$ density, also known as a Chi-squared$(1+\alpha)$ density.
Thus, to generate a value from such a distribution, undo all these transformations in reverse order:
Generate a value $Y$ from a Chi-squared$(1+\alpha)$ distribution, raise it to the $(1+\alpha)/2$ power, randomly negate it (with probability $1/2$), multiply by $\sigma,$ and add $\mu.$
This R code exhibits one such implementation. n is the number of independent values to draw.
rf <- function(n, mu, sigma, alpha) {
y <- rchisq(n, 1 + alpha) # A chi-squared variate
u <- sample(c(-1,1), n, replace = TRUE) # Random sign change
y^((1 + alpha)/2) * u * sigma + mu
}
Here are some examples of values generated in this fashion (100,000 of each) along with graphs of $f.$
Generating Chi-squared variates with parameter $1+\alpha$ near zero is problematic. You can see this code works for $1+\alpha = 0.1$ (bottom left), but watch out when it gets much smaller than this:
The spike and gap in the middle should not be there.
The problem lies with floating point arithmetic: even double precision does not suffice. By this point, though, the uniform distribution looks like a good approximation.
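As a numeric cross-check of the derivation (a sketch in Python/NumPy, not part of the original answer): since $|Z|^{2/(1+\alpha)}$ is Chi-squared$(1+\alpha)$ for $\mu=0,\sigma=1$, the sampler should satisfy $E\,|Z|^{2/(1+\alpha)} = 1+\alpha$.

```python
# Mirror the R sampler in NumPy and check the implied moment identity.
import numpy as np

def rpower(n, mu, sigma, alpha, rng):
    y = rng.chisquare(1 + alpha, size=n)    # Chi-squared(1 + alpha) variates
    u = rng.choice([-1.0, 1.0], size=n)     # random sign change
    return y ** ((1 + alpha) / 2) * u * sigma + mu

rng = np.random.default_rng(17)
for alpha in (0.0, 0.5, -0.9):
    z = rpower(200_000, 0, 1, alpha, rng)
    moment = np.mean(np.abs(z) ** (2 / (1 + alpha)))
    print(alpha, round(moment, 3))  # should be close to 1 + alpha
```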
Appendix
This R code produced the plots. It uses the showtext library to access a Google font for the axis numbers and labels. Few of these fonts, if any, support Greek or math characters, so I had to use the default font for the plot titles (using mtext). Otherwise, everything is done with the base R plotting functions hist and curve. Don't be concerned about the relatively large simulation size: the total computation time is far less than one second to generate these 400,000 variates.
library(showtext)
if(!("Informal" %in% font_families())) font_add_google("Fuzzy Bubbles", "Informal")
showtext_auto()
#
# Density calculation.
#
f <- function(x, mu, sigma, alpha)
exp(-1/2 * abs((x - mu) / sigma) ^ (2 / (1 + alpha)))
C <- function(mu, sigma, alpha, ...)
integrate(\(x) f(x, mu, sigma, alpha), -Inf, Inf, ...)$value
#
# Specify the distributions to plot.
#
Parameters <- list(list(mu = 0, sigma = 1, alpha = 0),
list(mu = 10, sigma = 2, alpha = 1/2),
list(mu = 0, sigma = 3, alpha = -0.9),
list(mu = 0, sigma = 4, alpha = 0.99))
#
# Generate the samples and plot summaries of them.
#
n.sim <- 1e5 # Sample size per plot
set.seed(17) # For reproducibility
pars <- par(mfrow = c(2, 2), mai = c(1/2, 3/4, 3/8, 1/8)) # Shrink the margins
for (parameters in Parameters)
with(parameters, {
x <- rf(n.sim, mu, sigma, alpha)
hist(x, freq = FALSE, breaks = 100, family = "Informal",
xlab = "", main = "", col = gray(0.9), border = gray(0.7))
mtext(bquote(list(mu==.(mu), sigma==.(sigma), alpha==.(alpha))),
cex = 1.25, side = 3, line = 0)
omega <- 1 / C(mu, sigma, alpha) # Compute the normalizing constant
curve(omega * f(x, mu, sigma, alpha), add = TRUE, lwd = 2, col = "Red")
})
par(pars) | Generating random variable which has a power distribution of Box and Tiao (1962) | Box & Tiao refer to this as a "convenient class of power distributions," referencing Diananda (1949), Box (1953), and Turner (1960).
Because $\mu$ and $\sigma$ just establish a unit of measurement and | Generating random variable which has a power distribution of Box and Tiao (1962)
Box & Tiao refer to this as a "convenient class of power distributions," referencing Diananda (1949), Box (1953), and Turner (1960).
Because $\mu$ and $\sigma$ just establish a unit of measurement and the absolute value reflects values around the origin, the basic density is proportional to $\exp(-z^p/2)$ where $p = 2/(1+\alpha)$ and $z \ge 0.$ Changing variables to $y = z^p$ for $0\lt p \lt \infty$ changes the probability element to
$$\exp(-z^p/2)\mathrm{d}z \to \exp(-y/2) \mathrm{d}\left(y^{1/p}\right) = \frac{1}{p}y^{1/p - 1}e^{-y/2}\mathrm{d}y.$$
Since $ p = 2/(1+\alpha),$ this is proportional to a scaled Gamma$(1/p)$ = Gamma$((1+\alpha)/2)$ density, also known as a Chi-squared$(1+\alpha)$ density.
Thus, to generate a value from such a distribution, undo all these transformations in reverse order:
Generate a value $Y$ from a Chi-squared$(1+\alpha)$ distribution, raise it to the $(1+\alpha)/2$ power, randomly negate it (with probability $1/2$), multiply by $\sigma,$ and add $\mu.$
This R code exhibits one such implementation. n is the number of independent values to draw.
rf <- function(n, mu, sigma, alpha) {
y <- rchisq(n, 1 + alpha) # A chi-squared variate
u <- sample(c(-1,1), n, replace = TRUE) # Random sign change
y^((1 + alpha)/2) * u * sigma + mu
}
Here are some examples of values generated in this fashion (100,000 of each) along with graphs of $f.$
Generating Chi-squared variates with parameter $1+\alpha$ near zero is problematic. You can see this code works for $1+\alpha = 0.1$ (bottom left), but watch out when it gets much smaller than this:
The spike and gap in the middle should not be there.
The problem lies with floating point arithmetic: even double precision does not suffice. By this point, though, the uniform distribution looks like a good approximation.
Appendix
This R code produced the plots. It uses the showtext library to access a Google font for the axis numbers and labels. Few of these fonts, if any, support Greek or math characters, so I had to use the default font for the plot titles (using mtext). Otherwise, everything is done with the base R plotting functions hist and curve. Don't be concerned about the relatively large simulation size: the total computation time is far less than one second to generate these 400,000 variates.
library(showtext)
if(!("Informal" %in% font_families())) font_add_google("Fuzzy Bubbles", "Informal")
showtext_auto()
#
# Density calculation.
#
f <- function(x, mu, sigma, alpha)
exp(-1/2 * abs((x - mu) / sigma) ^ (2 / (1 + alpha)))
C <- function(mu, sigma, alpha, ...)
integrate(\(x) f(x, mu, sigma, alpha), -Inf, Inf, ...)$value
#
# Specify the distributions to plot.
#
Parameters <- list(list(mu = 0, sigma = 1, alpha = 0),
list(mu = 10, sigma = 2, alpha = 1/2),
list(mu = 0, sigma = 3, alpha = -0.9),
list(mu = 0, sigma = 4, alpha = 0.99))
#
# Generate the samples and plot summaries of them.
#
n.sim <- 1e5 # Sample size per plot
set.seed(17) # For reproducibility
pars <- par(mfrow = c(2, 2), mai = c(1/2, 3/4, 3/8, 1/8)) # Shrink the margins
for (parameters in Parameters)
with(parameters, {
x <- rf(n.sim, mu, sigma, alpha)
hist(x, freq = FALSE, breaks = 100, family = "Informal",
xlab = "", main = "", col = gray(0.9), border = gray(0.7))
mtext(bquote(list(mu==.(mu), sigma==.(sigma), alpha==.(alpha))),
cex = 1.25, side = 3, line = 0)
omega <- 1 / C(mu, sigma, alpha) # Compute the normalizing constant
curve(omega * f(x, mu, sigma, alpha), add = TRUE, lwd = 2, col = "Red")
})
par(pars) | Generating random variable which has a power distribution of Box and Tiao (1962)
Box & Tiao refer to this as a "convenient class of power distributions," referencing Diananda (1949), Box (1953), and Turner (1960).
Because $\mu$ and $\sigma$ just establish a unit of measurement and |
35,280 | Textbook recommendations covering machine learning techniques for causal inference? | I follow this area pretty closely, but I think this subfield is so new no textbook exists (yet).
However, there are some course videos that are fairly good:
Machine Learning & Causal Inference: A Short Course at Stanford (accompanying tutorial)
Summer Institute in Machine Learning in Economics (MLESI21) at University of Chicago
There is also a nice survey paper:
"Machine learning methods that economists should know about"
by Susan Athey, Guido Imbens in the Annual Review of Economics (link to draft) | Textbook recommendations covering machine learning techniques for causal inference? | I follow this area pretty closely, but I think this subfield is so new no textbook exists (yet).
However, there are some course videos that are fairly good:
Machine Learning & Causal Inference: A Sho | Textbook recommendations covering machine learning techniques for causal inference?
I follow this area pretty closely, but I think this subfield is so new no textbook exists (yet).
However, there are some course videos that are fairly good:
Machine Learning & Causal Inference: A Short Course at Stanford (accompanying tutorial)
Summer Institute in Machine Learning in Economics (MLESI21) at University of Chicago
There is also a nice survey paper:
"Machine learning methods that economists should know about"
by Susan Athey, Guido Imbens in the Annual Review of Economics (link to draft) | Textbook recommendations covering machine learning techniques for causal inference?
I follow this area pretty closely, but I think this subfield is so new no textbook exists (yet).
However, there are some course videos that are fairly good:
Machine Learning & Causal Inference: A Sho |
35,281 | Textbook recommendations covering machine learning techniques for causal inference? | As dimitriy states, there isn't a singular textbook yet (or at least that I am aware of). However, there are a few textbook materials you can piece together to cover the topics you mentioned.
Targeted Learning in Data Science covers super learner (which is a generalized stacking algorithm you would almost always want to use in practice), and targeted maximum likelihood estimation (with a bunch of variations of it). I think this one will be preferred over the other targeted learning book since the one linked above covers the machine learning parts a bit better
Chapter 18 of Hernan and Robins covers double machine learning.
Unfortunately, I don't have a recommendation for causal trees | Textbook recommendations covering machine learning techniques for causal inference? | As dimitriy states, there isn't a singular textbook yet (or at least that I am aware of). However, there are a few textbook materials you can piece together to cover the topics you mentioned.
Targete | Textbook recommendations covering machine learning techniques for causal inference?
As dimitriy states, there isn't a singular textbook yet (or at least that I am aware of). However, there are a few textbook materials you can piece together to cover the topics you mentioned.
Targeted Learning in Data Science covers super learner (which is a generalized stacking algorithm you would almost always want to use in practice), and targeted maximum likelihood estimation (with a bunch of variations of it). I think this one will be preferred over the other targeted learning book since the one linked above covers the machine learning parts a bit better
Chapter 18 of Hernan and Robins covers double machine learning.
Unfortunately, I don't have a recommendation for causal trees | Textbook recommendations covering machine learning techniques for causal inference?
As dimitriy states, there isn't a singular textbook yet (or at least that I am aware of). However, there are a few textbook materials you can piece together to cover the topics you mentioned.
Targete |
35,282 | Textbook recommendations covering machine learning techniques for causal inference? | For most recent work have a look at the conference for Causal Learning and Reasoning (CLeaR) 2022.
If you want to get started with ML and causal inference, I particularly recommend (disclaimer: I'm one of the co-authors) to look at Kelly, Kong, Goerg (2022) on "Predictive State Propensity Subclassification (PSPS): A causal inference algorithm for data-driven propensity score stratification". It's a fully probabilistic framework for causal inference by learning causal representations in the predictive state space for Pr(outcome | treatment, features). See paper for details.
For a ready-to-go TensorFlow Keras implementation see https://github.com/gmgeorg/pypsps with code examples and notebook case studies. | Textbook recommendations covering machine learning techniques for causal inference? | For most recent work have a look at the conference for Causal Learning and Reasoning (CLeaR) 2022.
If you want to get started with ML and causal inference, I particularly recommend (disclaimer: I'm one | Textbook recommendations covering machine learning techniques for causal inference?
For most recent work have a look at the conference for Causal Learning and Reasoning (CLeaR) 2022.
If you want to get started with ML and causal inference, I particularly recommend (disclaimer: I'm one of the co-authors) to look at Kelly, Kong, Goerg (2022) on "Predictive State Propensity Subclassification (PSPS): A causal inference algorithm for data-driven propensity score stratification". It's a fully probabilistic framework for causal inference by learning causal representations in the predictive state space for Pr(outcome | treatment, features). See paper for details.
For a ready-to-go TensorFlow Keras implementation see https://github.com/gmgeorg/pypsps with code examples and notebook case studies. | Textbook recommendations covering machine learning techniques for causal inference?
For most recent work have a look at the conference for Causal Learning and Reasoning (CLeaR) 2022.
If you want to get started with ML and causal inference, I particularly recommend (disclaimer: I'm one |
35,283 | Why the standard error of the mean never gets to zero even when the sample has the same size as the population? | If the sample has the same size as the population, either some units from the population are sampled multiple times, or the sample is in fact the whole population.
In the former case, the sample mean provides an estimate of the population mean, in which case a different sample would likely yield a different estimate of the population mean. Thus, the standard error of the estimate should be greater than zero (unless the sample has zero variance, of course).
In the latter case, there is no need to estimate the population mean, because the population mean can simply be calculated. This statistic has a variance of zero; it will not vary if we sample again using the same sampling procedure. Thus, the standard error is zero in that case. | Why the standard error of the mean never gets to zero even when the sample has the same size as the | If the sample has the same size as the population, either some units from the population are sampled multiple times, or the sample is in fact the whole population.
In the former case, the sample mean | Why the standard error of the mean never gets to zero even when the sample has the same size as the population?
If the sample has the same size as the population, either some units from the population are sampled multiple times, or the sample is in fact the whole population.
In the former case, the sample mean provides an estimate of the population mean, in which case a different sample would likely yield a different estimate of the population mean. Thus, the standard error of the estimate should be greater than zero (unless the sample has zero variance, of course).
In the latter case, there is no need to estimate the population mean, because the population mean can simply be calculated. This statistic has a variance of zero; it will not vary if we sample again using the same sampling procedure. Thus, the standard error is zero in that case. | Why the standard error of the mean never gets to zero even when the sample has the same size as the
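A small simulation sketch of the two cases (in Python, not part of the original answer): drawing $N$ units with replacement still produces a varying mean estimate, whereas a census always returns the same value.

```python
# Former case: n = N draws WITH replacement -> the estimate still varies.
# Latter case: a census -> the "estimate" is a constant with zero variance.
import random
import statistics

random.seed(1)
population = list(range(100))                       # finite population, N = 100
with_repl = [statistics.mean(random.choices(population, k=100))
             for _ in range(2000)]
census = [statistics.mean(population) for _ in range(2000)]

print(round(statistics.pstdev(with_repl), 2))  # roughly sigma/sqrt(N), clearly > 0
print(statistics.pstdev(census))               # exactly 0.0
```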
If the sample has the same size as the population, either some units from the population are sampled multiple times, or the sample is in fact the whole population.
In the former case, the sample mean |
35,284 | Why the standard error of the mean never gets to zero even when the sample has the same size as the population? | When sampling from a finite population, the standard error of the mean $\frac{\sigma}{\sqrt{n}}$ must be multiplied by the finite population correction (FPC) $\sqrt{\frac{N - n}{N - 1}}$, where $N$ is the population size and $n$ the sample size. Hence, the corrected standard error of the mean is:
$$
\frac{\sigma}{\sqrt{n}}\sqrt{\frac{N - n}{N - 1}}
$$
and because $\lim_{n\rightarrow N}\sqrt{\frac{N - n}{N - 1}} = 0$, the standard error of the mean goes to zero, as the other posts explained.
Note that if you work with the sample variance using Bessel's correction ($n-1$) then the FPC is $\sqrt{\frac{N-n}{N}}$ (see here for a derivation). | Why the standard error of the mean never gets to zero even when the sample has the same size as the | When sampling from a finite population, the standard error of the mean $\frac{\sigma}{\sqrt{n}}$ must be multiplied by the finite population correction (FPC) $\sqrt{\frac{N - n}{N - 1}}$ where $N$ is | Why the standard error of the mean never gets to zero even when the sample has the same size as the population?
When sampling from a finite population, the standard error of the mean $\frac{\sigma}{\sqrt{n}}$ must be multiplied by the finite population correction (FPC) $\sqrt{\frac{N - n}{N - 1}}$, where $N$ is the population size and $n$ the sample size. Hence, the corrected standard error of the mean is:
$$
\frac{\sigma}{\sqrt{n}}\sqrt{\frac{N - n}{N - 1}}
$$
and because $\lim_{n\rightarrow N}\sqrt{\frac{N - n}{N - 1}} = 0$, the standard error of the mean goes to zero, as the other posts explained.
Note that if you work with the sample variance using Bessel's correction ($n-1$) then the FPC is $\sqrt{\frac{N-n}{N}}$ (see here for a derivation). | Why the standard error of the mean never gets to zero even when the sample has the same size as the
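A quick numeric illustration of the corrected formula (a Python sketch with made-up numbers, not part of the original answer): the correction factor drives the standard error to exactly zero at $n = N$.

```python
# FPC-corrected standard error: sigma/sqrt(n) * sqrt((N - n)/(N - 1)).
import math

def corrected_se(sigma, n, N):
    return sigma / math.sqrt(n) * math.sqrt((N - n) / (N - 1))

N, sigma = 1000, 5.0
for n in (10, 100, 500, 999, 1000):
    print(n, round(corrected_se(sigma, n, N), 4))
# At n = N the correction factor is sqrt(0), so the corrected SE is exactly 0.
```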
When sampling from a finite population, the standard error of the mean $\frac{\sigma}{\sqrt{n}}$ must be multiplied by |
35,285 | Why the standard error of the mean never gets to zero even when the sample has the same size as the population? | The reason that this does not happen is that we apply formulas that assume that the sample size is much smaller than the population size.
A sample is often regarded as taken from an infinite population and, when the sample mean is used as an estimate of the population mean, that estimate has an estimated variance of $$ \text{var}(\bar{y}) = \frac{1}{n(n-1)} \sum (y_i - \bar{y} )^2 $$
This assumes that the population can be regarded as infinite. When this assumption is false then this formula is not correct. And it is also incorrect to state that the variance does not get to zero.
When you estimate the mean of a finite population and you get to sample the entire population, then you can determine the population mean with 100% accuracy and the standard error should be zero. | Why the standard error of the mean never gets to zero even when the sample has the same size as the | The reason that this does not happen is that we apply formulas that assume that the sample size is much smaller than the population size.
A sample is often regarded as taken from an infinite populat | Why the standard error of the mean never gets to zero even when the sample has the same size as the population?
The reason that this does not happen is that we apply formulas that assume that the sample size is much smaller than the population size.
A sample is often regarded as taken from an infinite population and, when the sample mean is used as an estimate of the population mean, that estimate has an estimated variance of $$ \text{var}(\bar{y}) = \frac{1}{n(n-1)} \sum (y_i - \bar{y} )^2 $$
This assumes that the population can be regarded as infinite. When this assumption is false then this formula is not correct. And it is also incorrect to state that the variance does not get to zero.
When you estimate the mean of a finite population and you get to sample the entire population, then you can determine the population mean with 100% accuracy and the standard error should be zero. | Why the standard error of the mean never gets to zero even when the sample has the same size as the
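The last point can be checked with a short simulation (a Python sketch, not part of the original answer): repeatedly "sampling" the entire finite population without replacement always reproduces the population mean, while a genuinely smaller sample does not.

```python
# Without-replacement samples of size n < N vary; a census (n = N) never does.
import random
import statistics

random.seed(0)
population = [random.gauss(50, 10) for _ in range(200)]   # finite population

census_means = [statistics.mean(random.sample(population, 200)) for _ in range(100)]
small_means = [statistics.mean(random.sample(population, 20)) for _ in range(100)]

print(statistics.pstdev(small_means) > 0)   # True: nonzero standard error
print(statistics.pstdev(census_means))      # should be 0: the census is exact
```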
The reason that this does not happen is that we apply formulas that assume that the sample size is much smaller than the population size.
A sample is often regarded as taken from an infinite populat |
35,286 | Problems with cross-validation for time-ordered data | Your validation splits should match your target task. If the task is to take a database, find out sales prices for a randomly chosen subset, and then fill in the numbers for the rest (without an intent to predict future sales), then simple cross-validation is just fine. If the goal is to predict future prices, then a past-vs-future split is better (whether that's a single such split or multiple ones). Why? Here are some examples of what could go wrong (and it's usually very hard to intervene to prevent these):
Multiple sales of the same house might be in the data. If I have a past and a future sale, then guessing that the current sales price was in between is a pretty good bet. Sufficiently complex models will manage to memorize data to give this kind of answer and you will not even notice until you specifically look for it (I've seen exactly this type of thing with an xgboost once). On the other hand, in practice you would not have this information and real predictions would be less reliable.
Multiple similar new units might go onto the market at the same time. It's probably a good guess that identical properties in the same locale go for about the same price (plus or minus a bit of negotiating skill).
If things like inflation, state of the economy etc. change over time, then these might have various effects on property prices. If you have sales data at the exact same time of a future sale, then that might tell you a lot about these effects that you would not know in practice (even if you put current economic forecasts and inflation numbers into your model, which would presumably somewhat help).
A model trained with future data does not need to extrapolate into the future, it only needs to interpolate. So, a validation set-up like that does not test the ability of a model to extrapolate into the future. Of course, if we plan to regularly re-train a model (let's say once a month), it will not ever need to extrapolate very far into the future, so this might be less of a concern. Nevertheless, extrapolating a little bit into the future is still harder than interpolating with knowledge of the future.
All this, so far, assumed that individual sales don't affect each other, but in reality they might.
E.g. if people know the sales prices of recent sales in the area, they might target a price close to that (with adjustments depending on the property), but if that is so, then data from the future tells you something about similar past sales (but you would not have that knowledge when predicting the future).
Or, sales prices in one area might move in the same way due to being affected by some local event (new highway being built, new school opened etc.) and the model would learn this via other future sales from the area rather than from the things that could predict this in practice (e.g. knowledge of local events that move house prices). | Problems with cross-validation for time-ordered data | Your validation splits should match your target task. If taking a database, in a randomized fashion finding out sales prices and then filling the numbers for the rest is the task (without an intent to | Problems with cross-validation for time-ordered data
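The contrast between the two validation schemes can be sketched in a few lines (Python/NumPy, illustrative function names, not part of the original answer; rows are assumed sorted by sale timestamp): a random K-fold split routinely trains on sales that occur after the test sales, while a past-vs-future split never does.

```python
# Random K-fold vs. expanding past-vs-future splits over time-ordered rows.
import numpy as np

def random_kfold_splits(n, k, rng):
    idx = rng.permutation(n)
    folds = np.array_split(idx, k)
    return [(np.setdiff1d(idx, fold), fold) for fold in folds]

def past_future_splits(n, k):
    blocks = np.array_split(np.arange(n), k + 1)  # contiguous time blocks
    return [(np.concatenate(blocks[: i + 1]), blocks[i + 1]) for i in range(k)]

n = 12  # e.g. 12 months of sales, already sorted by timestamp
for train, test in past_future_splits(n, 3):
    assert train.max() < test.min()   # training data strictly precedes test data

rng = np.random.default_rng(0)
leaks = any(train.max() > test.min()
            for train, test in random_kfold_splits(n, 3, rng))
print(leaks)  # True: random folds put future rows into the training set
```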
Your validation splits should match your target task. If the task is to take a database, find out sales prices for a randomly chosen subset, and then fill in the numbers for the rest (without an intent to predict future sales), then simple cross-validation is just fine. If the goal is to predict future prices, then a past-vs-future split is better (whether that's a single such split or multiple ones). Why? Here are some examples of what could go wrong (and it's usually very hard to intervene to prevent these):
Multiple sales of the same house might be in the data. If I have a past and a future sale, then guessing that the current sales price was in between is a pretty good bet. Sufficiently complex models will manage to memorize data to give this kind of answer and you will not even notice until you specifically look for it (I've seen exactly this type of thing with an xgboost once). On the other hand, in practice you would not have this information and real predictions would be less reliable.
Multiple similar new units might go onto the market at the same time. It's probably a good guess that identical properties in the same locale go for about the same price (plus or minus a bit of negotiating skill).
If things like inflation, state of the economy etc. change over time, then these might have various effects on property prices. If you have sales data at the exact same time of a future sale, then that might tell you a lot about these effects that you would not know in practice (even if you put current economic forecasts and inflation numbers into your model, which would presumably somewhat help).
A model trained with future data does not need to extrapolate into the future, it only needs to interpolate. So, a validation set-up like that does not test the ability of a model to extrapolate into the future. Of course, if we plan to regularly re-train a model (let's say once a month), it will not ever need to extrapolate very far into the future, so this might be less of a concern. Nevertheless, extrapolating a little bit into the future is still harder than interpolating with knowledge of the future.
All this, so far, assumed that individual sales don't affect each other, but in reality they might.
E.g. if people know the sales prices of recent sales in the area, they might target a price close to that (with adjustments depending on the property), but if that is so, then data from the future tells you something about similar past sales (but you would not have that knowledge when predicting the future).
Or, sales prices in one area might move in the same way due to being affected by some local event (new highway being built, new school opened etc.) and the model would learn this via other future sales from the area rather than from the things that could predict this in practice (e.g. knowledge of local events that move house prices). | Problems with cross-validation for time-ordered data
Your validation splits should match your target task. If taking a database, in a randomized fashion finding out sales prices and then filling the numbers for the rest is the task (without an intent to |
35,287 | Problems with cross-validation for time-ordered data | I can't speak to the theoretical aspects, but why not preprocess your data to avoid the problem? If you can get some house price inflation data you can inflate the prices of the older (earlier timestamped) properties to values you can reasonably expect them to fetch today and then train your predictive algorithm using apple to apple comparisons across the data. | Problems with cross-validation for time-ordered data | I can't speak to the theoretical aspects, but why not preprocess your data to avoid the problem? If you can get some house price inflation data you can inflate the prices of the older (earlier timesta | Problems with cross-validation for time-ordered data
I can't speak to the theoretical aspects, but why not preprocess your data to avoid the problem? If you can get some house price inflation data you can inflate the prices of the older (earlier timestamped) properties to values you can reasonably expect them to fetch today and then train your predictive algorithm using apple to apple comparisons across the data. | Problems with cross-validation for time-ordered data
I can't speak to the theoretical aspects, but why not preprocess your data to avoid the problem? If you can get some house price inflation data you can inflate the prices of the older (earlier timesta |
35,288 | Problems with cross-validation for time-ordered data | I don't think that having a time-stamp on the data is necessarily problematic for housing price forecasting.
The procedure is to decompose the house into its attributes (size, location, etc.), add dummy variables for time, and then run a regression on these variables; this is called a "hedonic model" in the econometric literature. For a detailed discussion check for instance chapter 4.3 of "The practice of econometrics: Classic and contemporary", Berndt, 1996.
When you use this method, you estimate the coefficients accompanying the dummy time variables, so the model does not use specific house values from the past to compute future values. | Problems with cross-validation for time-ordered data | I don't think that having a time-stamp on the data is necessarily problematic for housing price forecasting.
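A minimal sketch of such a hedonic regression (Python/NumPy with simulated data; the attribute and the price process are made up for illustration, not taken from the reference): the time dummies absorb the market-level price level per period, while the attribute coefficient captures the hedonic value of size.

```python
# Hedonic model: price ~ intercept + size + year dummies, fit jointly by OLS.
import numpy as np

rng = np.random.default_rng(42)
n = 300
size = rng.uniform(50, 200, n)               # house attribute (square meters)
year = rng.integers(0, 3, n)                 # three sale periods
level = np.array([0.0, 5000.0, 12000.0])     # true period effects
price = 1000 * size + level[year] + rng.normal(0, 2000, n)

# Design matrix: intercept, size, dummies for periods 1 and 2 (period 0 = base)
X = np.column_stack([np.ones(n), size, year == 1, year == 2]).astype(float)
beta, *_ = np.linalg.lstsq(X, price, rcond=None)
print(np.round(beta))  # roughly [0, 1000, 5000, 12000]
```

The estimated dummy coefficients play the role of a constant-quality price index; predicting a later period then amounts to extrapolating that index rather than reusing individual past sale prices.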
The procedure is to decompose the house into its attributes (size, location, etc.), add dummy | Problems with cross-validation for time-ordered data
I don't think that having a time-stamp on the data is necessarily problematic for housing price forecasting.
The procedure is to decompose the house into its attributes (size, location, etc.), add dummy variables for time, and then run a regression on these variables; this is called a "hedonic model" in the econometric literature. For a detailed discussion check for instance chapter 4.3 of "The practice of econometrics: Classic and contemporary", Berndt, 1996.
When you use this method, you estimate the coefficients accompanying the dummy time variables, so the model does not use specific house values from the past to compute future values. | Problems with cross-validation for time-ordered data
I don't think that having a time-stamp on the data is necessarily problematic for housing price forecasting.
The procedure is to decompose the house into its attributes (size, location, etc.), add dummy |
35,289 | Problems with cross-validation for time-ordered data | One critical issue is that the covariance structure includes time-varying effects, which can impose an unrecognised pattern on the data if not handled appropriately. There will be general trends across house prices depending on the general economic situation evolving over time. More complex patterns such as evolving fashions and functional requirements will impose additional layers of complexity.
It is related to the Moiré effect. Here's a figure I've created recently that shows how hidden patterns in the data can affect CV, using an extreme example for demonstration purposes. Moiré patterns can be extremely complex and hard to detect if you don't look for them intentionally. If the periodicity of the sampling is a multiplicative factor of the periodicity of the underlying trend then you may under-sample underlying effects in the modelling set that mostly get held out for the test set (or vice versa).
Visualisation of experimental factor distribution in the simulated structured dataset. The color is created from the red channel indicating time interval, the green channel indicating dose exposure, and the blue channel indicating experimental group membership; samples selected for the hold-out group are shown at 1/3 reduced intensity. a-c) Week of study as x-axis, dose group (sub-divided by experimental group) as y-axis. a) is the basic experimental design matrix. b) black pixels indicate samples selected by sequential block selection for K-fold. c) black pixels indicate samples selected using Monte-Carlo. d) the average properties of the train and test sets with each selection method. | Problems with cross-validation for time-ordered data | One critical issue is that the covariance structure includes time-varying effects, which can impose an unrecognised pattern on the data if not handled appropriately. There will be general trends across ho | Problems with cross-validation for time-ordered data
One critical issue is that the covariance structure includes time-varying effects, which can impose an unrecognised pattern on the data if not handled appropriately. There will be general trends across house prices depending on the general economic situation evolving over time. More complex patterns such as evolving fashions and functional requirements will impose additional layers of complexity.
It is related to the Moiré effect. Here's a figure I've created recently that shows how hidden patterns in the data can affect CV, using an extreme example for demonstration purposes. Moiré patterns can be extremely complex and hard to detect if you don't look for them intentionally. If the periodicity of the sampling is a multiplicative factor of the periodicity of the underlying trend then you may under-sample underlying effects in the modelling set that mostly get held out for the test set (or vice versa).
Visualisation of experimental factor distribution in the simulated structured dataset. The color is created from the red channel indicating time interval, the green channel indicating dose exposure, and the blue channel indicating experimental group membership; samples selected for the hold-out group are shown at 1/3 reduced intensity. a-c) Week of study as x-axis, dose group (sub-divided by experimental group) as y-axis. a) is the basic experimental design matrix. b) black pixels indicate samples selected by sequential block selection for K-fold. c) black pixels indicate samples selected using Monte-Carlo. d) the average properties of the train and test sets with each selection method. | Problems with cross-validation for time-ordered data
One critical issue is that the covariance structure includes time-varying effects, which can impose an unrecognised pattern on the data if not handled appropriately. There will be general trends across ho
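The aliasing idea behind the Moiré comparison can be shown with a tiny pure-Python sketch (toy setup invented here): when a systematic hold-out step aligns with the period of a hidden day-of-week effect, the test set only ever sees one phase of that effect, while a step coprime with the period covers all phases.

```python
# Toy data with a hidden period-7 (day-of-week) structure
n = 70
phase = [i % 7 for i in range(n)]

# Systematic hold-out whose step equals the hidden period: aliasing
test_idx = [i for i in range(n) if i % 7 == 0]
test_phases = {phase[i] for i in test_idx}    # only phase 0 is ever tested

# A hold-out step coprime with the period samples every phase
test_idx2 = [i for i in range(n) if i % 5 == 0]
test_phases2 = {phase[i] for i in test_idx2}  # all 7 phases appear
```

The first split never evaluates the model on six of the seven phases, which is exactly the kind of hidden under-sampling the figure illustrates.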
35,290 | In R, is it possible for pbinom to take a noninteger x? | The function is operating correctly, using the definition of the CDF. Consider a random variable $X$ with integer support. For any input $x+r$ with integer part $x$ and remainder $0 \leqslant r < 1$ you should get:
$$F(x+r) \equiv \mathbb{P}(X \leqslant x+r) = \mathbb{P}(X \leqslant x) = F(x).$$
Sure enough, that is exactly what pbinom is doing:
#Check CDF values
identical(pbinom(6.94, 20, .25), pbinom(6, 20, .25))
[1] TRUE
identical(pbinom(3.06, 20, .25), pbinom(3, 20, .25))
[1] TRUE
You can also check that this corresponds to the output of dbinom (to within a small tolerance due to rounding) if you want:
#Check CDF values against PDF values
pbinom(6.94, 20, .25) - sum(dbinom(0:6, 20, .25))
[1] 0
pbinom(3.06, 20, .25) - sum(dbinom(0:3, 20, .25))
[1] 2.775558e-17 | In R, is it possible for pbinom to take a noninteger x? | The function is operating correctly, using the definition of the CDF. Consider a random variable $X$ with integer support. For any input $x+r$ with integer part $x$ and remainder $0 \leqslant r < 1$ | In R, is it possible for pbinom to take a noninteger x?
The function is operating correctly, using the definition of the CDF. Consider a random variable $X$ with integer support. For any input $x+r$ with integer part $x$ and remainder $0 \leqslant r < 1$ you should get:
$$F(x+r) \equiv \mathbb{P}(X \leqslant x+r) = \mathbb{P}(X \leqslant x) = F(x).$$
Sure enough, that is exactly what pbinom is doing:
#Check CDF values
identical(pbinom(6.94, 20, .25), pbinom(6, 20, .25))
[1] TRUE
identical(pbinom(3.06, 20, .25), pbinom(3, 20, .25))
[1] TRUE
You can also check that this corresponds to the output of dbinom (to within a small tolerance due to rounding) if you want:
#Check CDF values against PDF values
pbinom(6.94, 20, .25) - sum(dbinom(0:6, 20, .25))
[1] 0
pbinom(3.06, 20, .25) - sum(dbinom(0:3, 20, .25))
[1] 2.775558e-17 | In R, is it possible for pbinom to take a noninteger x?
The function is operating correctly, using the definition of the CDF. Consider a random variable $X$ with integer support. For any input $x+r$ with integer part $x$ and remainder $0 \leqslant r < 1$ |
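The same floor behaviour can be reproduced outside R; here is a pure-Python sketch (not R's actual implementation) of a binomial CDF that, like pbinom, only looks at the integer part of its argument.

```python
from math import comb, floor

def binom_cdf(x, n, p):
    # CDF of Binomial(n, p) at a possibly non-integer x:
    # for a discrete distribution only floor(x) matters
    k = floor(x)
    if k < 0:
        return 0.0
    return sum(comb(n, j) * p**j * (1 - p)**(n - j)
               for j in range(min(k, n) + 1))

# mirrors pbinom: F(6.94) == F(6) and F(3.06) == F(3)
assert binom_cdf(6.94, 20, 0.25) == binom_cdf(6, 20, 0.25)
assert binom_cdf(3.06, 20, 0.25) == binom_cdf(3, 20, 0.25)
```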
35,291 | In R, is it possible for pbinom to take a noninteger x? | There are two ways of doing this.
Use a normal approximation to the sampling distribution of the sample mean.
Round the endpoints to the "closest" integer to get a conservative interval.
You may ask the teacher for clarity if this isn't clearly an application of some methods you've already covered.
EDIT:
Though R provides output for these values, their existence is somewhat of an artifact of using the gamma function to evaluate the combinatorial terms in the expression of the density function. There is only a little "window" of gamma probability around the non-integral values, and relatively steep "steps" in the CDF. Importantly, these "windows" and "steep steps" don't correspond to any physical quantity, nor do they make any practical difference compared with rounding.
One can try:
set.seed(123)
sim <- rbinom(1e7, 20, 0.25)
mean(3.06 < sim & sim < 6.94)
mean(3 < sim & sim <= 6)
and find
mean(3.06 < sim & sim < 6.94)
[1] 0.5605642
mean(3 < sim & sim <= 6)
[1] 0.5605642
similarly:
> pbinom(6.94, 20, 0.25) - pbinom(3.06, 20, 0.25)
[1] 0.5606259
> pbinom(6, 20, 0.25) - pbinom(3, 20, 0.25)
[1] 0.5606259 | In R, is it possible for pbinom to take a noninteger x? | There are two ways of doing this.
Use a normal approximation to the sampling distribution of the sample mean.
Round the endpoints to the "closest" integer to get a conservative interval.
You may ask | In R, is it possible for pbinom to take a noninteger x?
There are two ways of doing this.
Use a normal approximation to the sampling distribution of the sample mean.
Round the endpoints to the "closest" integer to get a conservative interval.
You may ask the teacher for clarity if this isn't clearly an application of some methods you've already covered.
EDIT:
Though R provides output for these values, their existence is somewhat of an artifact of using the gamma function to evaluate the combinatorial terms in the expression of the density function. There is only a little "window" of gamma probability around the non-integral values, and relatively steep "steps" in the CDF. Importantly, these "windows" and "steep steps" don't correspond to any physical quantity, nor do they make any practical difference compared with rounding.
One can try:
set.seed(123)
sim <- rbinom(1e7, 20, 0.25)
mean(3.06 < sim & sim < 6.94)
mean(3 < sim & sim <= 6)
and find
mean(3.06 < sim & sim < 6.94)
[1] 0.5605642
mean(3 < sim & sim <= 6)
[1] 0.5605642
similarly:
> pbinom(6.94, 20, 0.25) - pbinom(3.06, 20, 0.25)
[1] 0.5606259
> pbinom(6, 20, 0.25) - pbinom(3, 20, 0.25)
[1] 0.5606259 | In R, is it possible for pbinom to take a noninteger x?
There are two ways of doing this.
Use a normal approximation to the sampling distribution of the sample mean.
Round the endpoints to the "closest" integer to get a conservative interval.
You may ask |
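The simulation in this answer checks that the two interval formulations agree; the same equality can also be verified exactly, without simulation, by summing the pmf over the integers that fall inside each interval. A pure-Python sketch:

```python
from math import comb

def binom_pmf(k, n, p):
    # pmf of Binomial(n, p)
    return comb(n, k) * p**k * (1 - p)**(n - k)

n, p = 20, 0.25
# Only the integers 4, 5, 6 satisfy 3.06 < k < 6.94, so the non-integer
# endpoints and the rounded ones give exactly the same probability.
p_open = sum(binom_pmf(k, n, p) for k in range(n + 1) if 3.06 < k < 6.94)
p_int = sum(binom_pmf(k, n, p) for k in (4, 5, 6))
```

Both sums equal the 0.5606 value reported above, which is why the non-integral endpoints make no practical difference.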
35,292 | Is it possible that marginally independent random variables are conditionally dependent? | Here is a simple example with three Bernoulli random variables $X,Y,Z$ with the properties that
$X$ and $Z$ are independent, $Y$ and $Z$ are independent and
$X$ and $Z$ are conditionally dependent random variables given the value of $Y$,
$Y$ and $Z$ are conditionally dependent random variables given the value of $X$,
Suppose that $(X,Y,Z)$ takes on the $4$ values $(0,0,0), (0,1,1), (1,0,1), (1,1,0)$ with equal probability $\frac 14$. It is easily verified that $X, Y$, and $Z$ are indeed Bernoulli random variables with parameter $\frac 12$, and that $X, Y$, and $Z$ are indeed pairwise independent random variables. (Those too lazy to carry out this verification for themselves can read some details here.) But notice that
given that $Y=0$, $X$ equals $Z$, while given that $Y=1$, $X$ equals $1-Z$ and thus $X$ and $Z$ are conditionally dependent given $Y$.
Similarly,
given that $X=0$, $Y$ equals $Z$ while given that $X=1$, $Y$ equals $1-Z$ and thus $Y$ and $Z$ are conditionally dependent given $X$.
For these specific random variables, it is also true that $X$ and $Y$ are independent random variables; in fact, $X,Y,Z$ are pairwise independent but not mutually independent random variables. In fact, for these specific random variables, it is also true that
given that $Z=0$, $X$ equals $Y$ while given that $Z=1$, $X$ equals $1-Y$ and thus $X$ and $Y$ are conditionally dependent given $Z$,
which has a pleasing symmetry with the two previous bulleted points. These extra properties might not hold for other sets of random variables that satisfy the requirements laid out in the problem posed by the OP (which don't include the requirement of conditional dependence of $X$ and $Y$ given $Z$). | Is it possible that marginally independent random variables are conditionally dependent? | Here is a simple example with three Bernoulli random variables $X,Y,Z$ with the properties that
$X$ and $Z$ are independent, $Y$ and $Z$ are independent and
$X$ and $Z$ are conditionally dependent rand | Is it possible that marginally independent random variables are conditionally dependent?
Here is a simple example with three Bernoulli random variables $X,Y,Z$ with the properties that
$X$ and $Z$ are independent, $Y$ and $Z$ are independent and
$X$ and $Z$ are conditionally dependent random variables given the value of $Y$,
$Y$ and $Z$ are conditionally dependent random variables given the value of $X$,
Suppose that $(X,Y,Z)$ takes on the $4$ values $(0,0,0), (0,1,1), (1,0,1), (1,1,0)$ with equal probability $\frac 14$. It is easily verified that $X, Y$, and $Z$ are indeed Bernoulli random variables with parameter $\frac 12$, and that $X, Y$, and $Z$ are indeed pairwise independent random variables. (Those too lazy to carry out this verification for themselves can read some details here.) But notice that
given that $Y=0$, $X$ equals $Z$, while given that $Y=1$, $X$ equals $1-Z$ and thus $X$ and $Z$ are conditionally dependent given $Y$.
Similarly,
given that $X=0$, $Y$ equals $Z$ while given that $X=1$, $Y$ equals $1-Z$ and thus $Y$ and $Z$ are conditionally dependent given $X$.
For these specific random variables, it is also true that $X$ and $Y$ are independent random variables; in fact, $X,Y,Z$ are pairwise independent but not mutually independent random variables. In fact, for these specific random variables, it is also true that
given that $Z=0$, $X$ equals $Y$ while given that $Z=1$, $X$ equals $1-Y$ and thus $X$ and $Y$ are conditionally dependent given $Z$,
which has a pleasing symmetry with the two previous bulleted points. These extra properties might not hold for other sets of random variables that satisfy the requirements laid out in the problem posed by the OP (which don't include the requirement of conditional dependence of $X$ and $Y$ given $Z$). | Is it possible that marginally independent random variables are conditionally dependent?
Here is a simple example with three Bernoulli random variables $X,Y,Z$ with the properties that
$X$ and $Z$ are independent, $Y$ and $Z$ are independent and
$X$ and $Z$ are conditionally dependent rand |
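The "easily verified" claims in this answer can also be checked mechanically; here is a small Python sketch that enumerates the four equally likely outcomes and confirms both pairwise independence and conditional dependence given $Y$.

```python
from itertools import product

# The four equally likely outcomes (X, Y, Z)
outcomes = [(0, 0, 0), (0, 1, 1), (1, 0, 1), (1, 1, 0)]
prob = {s: 0.25 for s in outcomes}
IDX = {"x": 0, "y": 1, "z": 2}

def p(**fixed):
    # probability that the named coordinates take the given values
    return sum(pr for s, pr in prob.items()
               if all(s[IDX[k]] == v for k, v in fixed.items()))

# Pairwise independence, e.g. of X and Z
for a, c in product([0, 1], repeat=2):
    assert p(x=a, z=c) == p(x=a) * p(z=c)

# Conditional dependence of X and Z given Y = 0 (X equals Z there)
assert p(x=0, z=0, y=0) / p(y=0) != (p(x=0, y=0) / p(y=0)) * (p(z=0, y=0) / p(y=0))
```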
35,293 | Is it possible that marginally independent random variables are conditionally dependent? | @Dilip Sarwate has already given a good answer; the purpose of this one is to show the connection with Simpson's paradox. If such examples did not exist, Simpson's paradox would have been eliminated. The easiest way to see this is with a simple example with artificial data. Let $X, Y, Z$ have the possible values $-1, +1$. In the two strata defined by $Z$ we have the following tables, with $X$ in rows, $Y$ in columns:
dist[, , 1] # Table for Z=-1:
Y
X -1 1
-1 2 4
1 4 2
dist[, , 2] # Table for Z=+1:
Y
X -1 1
-1 4 2
1 2 4
Observe that $X$ and $Y$ are dependent in both these conditional distributions. But all the three bivariate marginals are uniform, so we have pairwise independence:
apply(dist, c(1, 2), sum)
Y
X -1 1
-1 6 6
1 6 6
apply(dist, c(1, 3), sum)
Z
X -1 1
-1 6 6
1 6 6
apply(dist, c(2, 3), sum)
Z
Y -1 1
-1 6 6
1 6 6
So we have an instance of Simpson's paradox: Conditioning shows an effect which cannot be seen in the marginal distribution. Compare with examples here: How to resolve Simpson's paradox?
Construction of example data in R:
dist <- array(c(2, 4, 4, 2, 4, 2, 2, 4),
dim=c(2, 2, 2),
dimnames=list(c(-1, 1), c(-1, 1), c(-1, 1)))
names(dimnames(dist)) <- c("X", "Y", "Z")
``` | Is it possible that marginally independent random variables are conditionally dependent? | @Dilip Sarwate: have already given a good answer, the purpose of this is to show the connection with Simpson's paradox. If such examples did not exist, Simpson's paradox would have been eliminated. Th | Is it possible that marginally independent random variables are conditionally dependent?
@Dilip Sarwate has already given a good answer; the purpose of this one is to show the connection with Simpson's paradox. If such examples did not exist, Simpson's paradox would have been eliminated. The easiest way to see this is with a simple example with artificial data. Let $X, Y, Z$ have the possible values $-1, +1$. In the two strata defined by $Z$ we have the following tables, with $X$ in rows, $Y$ in columns:
dist[, , 1] # Table for Z=-1:
Y
X -1 1
-1 2 4
1 4 2
dist[, , 2] # Table for Z=+1:
Y
X -1 1
-1 4 2
1 2 4
Observe that $X$ and $Y$ are dependent in both these conditional distributions. But all the three bivariate marginals are uniform, so we have pairwise independence:
apply(dist, c(1, 2), sum)
Y
X -1 1
-1 6 6
1 6 6
apply(dist, c(1, 3), sum)
Z
X -1 1
-1 6 6
1 6 6
apply(dist, c(2, 3), sum)
Z
Y -1 1
-1 6 6
1 6 6
So we have an instance of Simpson's paradox: Conditioning shows an effect which cannot be seen in the marginal distribution. Compare with examples here: How to resolve Simpson's paradox?
Construction of example data in R:
dist <- array(c(2, 4, 4, 2, 4, 2, 2, 4),
dim=c(2, 2, 2),
dimnames=list(c(-1, 1), c(-1, 1), c(-1, 1)))
names(dimnames(dist)) <- c("X", "Y", "Z")
``` | Is it possible that marginally independent random variables are conditionally dependent?
@Dilip Sarwate has already given a good answer; the purpose of this one is to show the connection with Simpson's paradox. If such examples did not exist, Simpson's paradox would have been eliminated. Th
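The same checks can be done without R; here is a pure-Python sketch of this example's 2x2x2 table, verifying the uniform bivariate marginals alongside the conditional dependence within a $Z$-stratum.

```python
# Joint counts for (X, Y, Z) from the example above, levels in {-1, +1}
counts = {(-1, -1, -1): 2, (-1, 1, -1): 4, (1, -1, -1): 4, (1, 1, -1): 2,
          (-1, -1, 1): 4, (-1, 1, 1): 2, (1, -1, 1): 2, (1, 1, 1): 4}
total = sum(counts.values())  # 24
IDX = {"x": 0, "y": 1, "z": 2}

def p(**fixed):
    # probability that the named coordinates take the given values
    return sum(c for s, c in counts.items()
               if all(s[IDX[k]] == v for k, v in fixed.items())) / total

# All three bivariate marginals are uniform -> pairwise independence
for a in (-1, 1):
    for b in (-1, 1):
        assert p(x=a, y=b) == p(x=a, z=b) == p(y=a, z=b) == 0.25

# ...but X and Y are dependent within the stratum Z = -1 (Simpson's paradox)
p_xy_given_z = p(x=-1, y=-1, z=-1) / p(z=-1)   # 2/12
p_x_given_z = p(x=-1, z=-1) / p(z=-1)          # 6/12
p_y_given_z = p(y=-1, z=-1) / p(z=-1)          # 6/12
assert p_xy_given_z != p_x_given_z * p_y_given_z
```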
35,294 | Standard Error, Standard Deviation and Variance confusion | The term "standard error" refers to the standard deviation of a statistic that is calculated. So, you can calculate a standard error for a mean--because the mean is a statistic. You can also calculate a standard error for a parameter estimate like $\hat{\beta}$.
We say standard error instead of standard deviation to distinguish between a value that's calculated from repeated observations and an estimate that's based on a theory about the distribution.
We only have one observation for $\hat{\beta}$, and we have mathematical theory to derive its sampling error--so we call that the standard error.
We have more than one observation of a variable X, and we calculate the sampling error based on that observed data--so we call that statistic the standard deviation. | Standard Error, Standard Deviation and Variance confusion | The term "standard error" refers to the standard deviation of a statistic that is calculated. So, you can calculate a standard error for a mean--because the mean is a statistic. You can also calculate | Standard Error, Standard Deviation and Variance confusion
The term "standard error" refers to the standard deviation of a statistic that is calculated. So, you can calculate a standard error for a mean--because the mean is a statistic. You can also calculate a standard error for a parameter estimate like $\hat{\beta}$.
We say standard error instead of standard deviation to distinguish between a value that's calculated from repeated observations and an estimate that's based on a theory about the distribution.
We only have one observation for $\hat{\beta}$, and we have mathematical theory to derive its sampling error--so we call that the standard error.
We have more than one observation of a variable X, and we calculate the sampling error based on that observed data--so we call that statistic the standard deviation. | Standard Error, Standard Deviation and Variance confusion
The term "standard error" refers to the standard deviation of a statistic that is calculated. So, you can calculate a standard error for a mean--because the mean is a statistic. You can also calculate |
35,295 | Standard Error, Standard Deviation and Variance confusion | The terminology is the same everywhere in statistics I think:
Variance $\sigma^2$ is the second moment of a known probability distribution
Standard Deviation $\sigma$ is the square root of variance
Variance of the mean $\sigma^2_{\mu} = \frac{\sigma^2}{N}$ is the variance of the mean of $N$ i.i.d random variables
Standard Deviation of the Mean $\sigma_{\mu}$ is the square root of the variance of the mean
The 4 above metrics apply analytically to probability distributions. One can estimate any one of them, typically denoted by letter $s$ and the prefix 'sample', such as 'sample standard deviation of the mean' $s_{\mu}$. Sample standard deviation and sample standard deviation of the mean are also known as Standard Error and Standard Error of the mean (SEM) respectively
With respect to your questions:
Variance and standard deviation are metrics of the distribution of the random variables in analytic case and a metric of data in the sample case. These terms are not applicable to parameters of your model, such as $\beta$ or $\hat \beta$. These are simply the parameter and its estimate.
When you construct a confidence interval for an unknown parameter, you perform a hypothesis test. The confidence interval is likely to be a function of the moments of the distribution, or their sample counterparts, but that depends strongly on the underlying distribution.
Confidence intervals only apply to unknown parameters of the model, they do not apply to parts of data such as $y$. The closest entity to a confidence interval when applied to random variable itself is a tolerance interval, namely, the interval where the random variable is likely to fall given the exact model parameters | Standard Error, Standard Deviation and Variance confusion | The terminology is the same everywhere in statistics I think:
Variance $\sigma^2$ is the second moment of a known probability distribution
Standard Deviation $\sigma$ is the square root of variance
V | Standard Error, Standard Deviation and Variance confusion
The terminology is the same everywhere in statistics I think:
Variance $\sigma^2$ is the second moment of a known probability distribution
Standard Deviation $\sigma$ is the square root of variance
Variance of the mean $\sigma^2_{\mu} = \frac{\sigma^2}{N}$ is the variance of the mean of $N$ i.i.d random variables
Standard Deviation of the Mean $\sigma_{\mu}$ is the square root of the variance of the mean
The 4 above metrics apply analytically to probability distributions. One can estimate any one of them, typically denoted by letter $s$ and the prefix 'sample', such as 'sample standard deviation of the mean' $s_{\mu}$. Sample standard deviation and sample standard deviation of the mean are also known as Standard Error and Standard Error of the mean (SEM) respectively
With respect to your questions:
Variance and standard deviation are metrics of the distribution of the random variables in analytic case and a metric of data in the sample case. These terms are not applicable to parameters of your model, such as $\beta$ or $\hat \beta$. These are simply the parameter and its estimate.
When you construct a confidence interval for an unknown parameter, you perform a hypothesis test. The confidence interval is likely to be a function of the moments of the distribution, or their sample counterparts, but that depends strongly on the underlying distribution.
Confidence intervals only apply to unknown parameters of the model, they do not apply to parts of data such as $y$. The closest entity to a confidence interval when applied to random variable itself is a tolerance interval, namely, the interval where the random variable is likely to fall given the exact model parameters | Standard Error, Standard Deviation and Variance confusion
The terminology is the same everywhere in statistics I think:
Variance $\sigma^2$ is the second moment of a known probability distribution
Standard Deviation $\sigma$ is the square root of variance
V |
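The $\sigma_\mu = \sigma/\sqrt{N}$ relationship above is easy to illustrate numerically; here is a short Python sketch with invented data, computing the sample standard deviation and the corresponding standard error of the mean.

```python
from math import sqrt
from statistics import mean

x = [4.1, 5.0, 3.8, 4.6, 5.2, 4.4, 4.9, 4.0]  # invented sample
n = len(x)
m = mean(x)

# Sample variance / standard deviation of the data (n - 1 denominator)
s = sqrt(sum((xi - m) ** 2 for xi in x) / (n - 1))

# Standard error of the mean: the estimated sd of the sampling
# distribution of the mean, smaller than s by a factor sqrt(n)
sem = s / sqrt(n)
```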
35,296 | A good book for regression analysis for pure mathematicians | I would recommend Seber & Lee (from which I originally learned regression); it covers most of your topics with proofs. An alternative in the same style, but also covering GLMs, is Linear Models and Generalizations: Least Squares and Alternatives by Rao et al.
A shorter book with a more geometric viewpoint is The Coordinate-Free Approach to Linear Models by Michael J. Wichura, but it will not cover all your topics. | A good book for regression analysis for pure mathematicians | I would recommend Seber & Lee (from which I originally learned regression.) Cover most of your topics with proofs. An alternative in the same style, but also covering glm's is Linear Models and Gener | A good book for regression analysis for pure mathematicians
I would recommend Seber & Lee (from which I originally learned regression); it covers most of your topics with proofs. An alternative in the same style, but also covering GLMs, is Linear Models and Generalizations: Least Squares and Alternatives by Rao et al.
A shorter book with a more geometric viewpoint is The Coordinate-Free Approach to Linear Models by Michael J. Wichura, but it will not cover all your topics. | A good book for regression analysis for pure mathematicians
I would recommend Seber & Lee (from which I originally learned regression); it covers most of your topics with proofs. An alternative in the same style, but also covering GLMs, is Linear Models and Gener
35,297 | A good book for regression analysis for pure mathematicians | I think that you might be interested in
Stachurski (2016) A Primer in Econometric Theory.
The book is quite mathematically oriented. The book is organized into 3 sections:
Background - which is all about pure mathematical foundations; vector spaces, linear algebra and matrices, foundations of probability, modeling dependence, asymptotics etc.
Foundations of statistics - which covers rigorous mathematical definition of many basic statistical concepts, properties of estimators, confidence sets etc.
Econometric models: this section covers some econometric models in great detail. Its range of models is not wide, but it goes quite in depth for each topic (there is a whole chapter just on the geometry of least squares).
I think based on your description this is what you are looking for. It definitely covers most of the topics you listed (though not quite all: information criteria like AIC are mentioned but not discussed in great detail), and otherwise everything you listed should be covered. It is very in depth and focused on theory rather than practical applications. | A good book for regression analysis for pure mathematicians | I think that you might be interested in
Stachurski (2016) A Primer in Econometric Theory.
The book is quite mathematically oriented. The book is organized into 3 sections:
Background - which is all a | A good book for regression analysis for pure mathematicians
I think that you might be interested in
Stachurski (2016) A Primer in Econometric Theory.
The book is quite mathematically oriented. The book is organized into 3 sections:
Background - which is all about pure mathematical foundations; vector spaces, linear algebra and matrices, foundations of probability, modeling dependence, asymptotics etc.
Foundations of statistics - which covers rigorous mathematical definition of many basic statistical concepts, properties of estimators, confidence sets etc.
Econometric models: this section covers some econometric models in great detail. Its range of models is not wide, but it goes quite in depth for each topic (there is a whole chapter just on the geometry of least squares).
I think based on your description this is what you are looking for. It definitely covers most of the topics you listed (though not quite all: information criteria like AIC are mentioned but not discussed in great detail), and otherwise everything you listed should be covered. It is very in depth and focused on theory rather than practical applications. | A good book for regression analysis for pure mathematicians
I think that you might be interested in
Stachurski (2016) A Primer in Econometric Theory.
The book is quite mathematically oriented. The book is organized into 3 sections:
Background - which is all a |
35,298 | Why can I interpret a log transformed dependent variable in terms of percent change in linear regression? | Say we have a model like this:
$$\log\hat y=\beta_0+\beta_1 x$$
Since $\exp$ is the inverse function of $\log$, we can do this:
$$\hat y = f(x)=\exp(\beta_0+\beta_1 x)$$
Now, what happens when $x$ grows by 1? $f(x)$ multiplies by $\exp(\beta_1)$:
$$\begin{align}
f(x+1)&=\exp[\beta_0+\beta_1(x+1)]\\
&=\exp(\beta_0+\beta_1 x)\cdot\exp(\beta_1)\\
&=f(x)\cdot\exp(\beta_1)
\end{align}$$
OK, now how much does $f(x)$ grow in percentages?
$$\left(\frac{f(x+1)}{f(x)}-1\right)\cdot100=(\exp(\beta_1)-1)\cdot100$$
This explains the formula for converting coefficients into percent changes. Up to here, we used no approximations. Now, if $x$ is a small enough number, we can approximate $\exp(x)\approx1+x$. This approximation is called a first-order Taylor expansion of $\exp(x)$ around $x=0$. If you apply this approximation to the $\text{coefficients}\rightarrow\text{percent change}$ formula we found earlier, you get:
$$\text{percent change}\approx100\cdot\beta_1$$
So, when $\beta_1$ is a small number, you can interpret it directly as a percent change - but keep in mind that this is just an approximation. | Why can I interpret a log transformed dependent variable in terms of percent change in linear regres | Say we have a model like this:
$$\log\hat y=\beta_0+\beta_1 x$$
Since $\exp$ is the inverse function of $\log$, we can do this:
$$\hat y = f(x)=\exp(\beta_0+\beta_1 x)$$
Now, what happens when $x$ gro | Why can I interpret a log transformed dependent variable in terms of percent change in linear regression?
Say we have a model like this:
$$\log\hat y=\beta_0+\beta_1 x$$
Since $\exp$ is the inverse function of $\log$, we can do this:
$$\hat y = f(x)=\exp(\beta_0+\beta_1 x)$$
Now, what happens when $x$ grows by 1? $f(x)$ multiplies by $\exp(\beta_1)$:
$$\begin{align}
f(x+1)&=\exp[\beta_0+\beta_1(x+1)]\\
&=\exp(\beta_0+\beta_1 x)\cdot\exp(\beta_1)\\
&=f(x)\cdot\exp(\beta_1)
\end{align}$$
OK, now how much does $f(x)$ grow in percentages?
$$\left(\frac{f(x+1)}{f(x)}-1\right)\cdot100=(\exp(\beta_1)-1)\cdot100$$
This explains the formula for converting coefficients into percent changes. Up to here, we used no approximations. Now, if $x$ is a small enough number, we can approximate $\exp(x)\approx1+x$. This approximation is called a first-order Taylor expansion of $\exp(x)$ around $x=0$. If you apply this approximation to the $\text{coefficients}\rightarrow\text{percent change}$ formula we found earlier, you get:
$$\text{percent change}\approx100\cdot\beta_1$$
So, when $\beta_1$ is a small number, you can interpret it directly as a percent change - but keep in mind that this is just an approximation. | Why can I interpret a log transformed dependent variable in terms of percent change in linear regres
Say we have a model like this:
$$\log\hat y=\beta_0+\beta_1 x$$
Since $\exp$ is the inverse function of $\log$, we can do this:
$$\hat y = f(x)=\exp(\beta_0+\beta_1 x)$$
Now, what happens when $x$ gro |
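A quick numeric check of the approximation (a Python sketch with arbitrary example coefficients): for small $\beta_1$ the shortcut $100\cdot\beta_1$ is close to the exact percent change $100\cdot(\exp(\beta_1)-1)$, and the gap grows with $\beta_1$.

```python
from math import exp

# Exact percent change implied by a log-linear coefficient
# versus the 100*beta shortcut, for a few coefficient sizes
for beta in (0.01, 0.05, 0.10, 0.30):
    exact = (exp(beta) - 1) * 100
    approx = beta * 100
    print(f"beta={beta:.2f}: exact {exact:6.3f}%  vs approx {approx:5.2f}%")
```

At $\beta_1 = 0.30$ the exact change is roughly 35%, so the 30% shortcut is already noticeably off.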
35,299 | Can Negative Binomial parameters be treated like Poisson? | I put up some code to perform this task in PyMC3, since you mentioned it in the question. The first part, which you seem to already be familiar with, would be fitting the model to get a posterior distribution on the parameters:
import pymc3 as pm
import numpy as np
# generating simulated data data for a week
data = pm.NegativeBinomial.dist(mu=3, alpha=1).random(size=7*24*2)
# defining the model and sampling (MCMC)
with pm.Model() as model:
alpha = pm.Exponential("alpha", 2.0)
mean = pm.Exponential("mean", 0.2)
obs_data = pm.NegativeBinomial("obs_data", mu=mean, alpha=alpha, observed=data)
trace = pm.sample()
# plotting the posterior
pm.traceplot(trace)
pm.plot_posterior(trace)
Now we get to the part on which you seem to be struggling. We can use this nice property: when two random variables, $X$ and $Y$ have negative binomial distributions with the same overdispersion parameter, then $X+Y$ also has negative binomial distribution, with mean $\mathbb E[X]+\mathbb E[Y]$ and the same overdispersion parameter as $X$ and $Y$. You can find proofs for this property here.
Assuming that the negative binomial parameters are fixed (formally, assuming your stochastic process is in the class of Lévy processes, in which Poisson processes are included), that implies that if you want to know the distribution for the number of events in a whole hour or a whole day, you just have to adjust the mean, like you would do with a Poisson process.
For example, to find out how atypical it would be to find more than 200 events in a single day, we could use the following:
np.mean(pm.NegativeBinomial.dist(mu=48*trace["mean"], alpha=trace["alpha"]).random(10**4)>200)
Let's break this line of code down for a bit. When we use pm.NegativeBinomial.dist(mu=..., alpha=...), we are invoking the PyMC3 implementation of the negative binomial with a specific set of parameters (we could use the Numpy implementation as well, but they are parameterized differently, so it is less error prone to stick to PyMC3).
We then use the parameters we sampled from the posterior: alpha=trace["alpha"] for the overdispersion, and mu=48*trace["mean"] for the mean (we multiply by 48 to adjust this mean to reflect 24 hours instead of half an hour).
Finally, we sample many instances from this distribution and compare them to the value we are interested in (.random(10**4)>200), then finding the probability of new samples from our negative binomial process exceeding it (by applying np.mean to the resulting array of booleans). The result is the probability that your model would generate a day with 200 events or more.
A few caveats here:
if your model allows overdispersion to change with time, none of this will work
if your model allows the Poisson rate to change with time as some function $\lambda(t)$, this will have to be somewhat adapted. Instead of multiplying the rate by some number, you will have to integrate $\lambda(t)$, making this a bit more complicated.
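To sketch the second caveat under a made-up rate, say $\lambda(t) = a + bt$: the mean you would plug into the predictive distribution is $\int_0^T \lambda(t)\,dt$, not $\lambda \cdot T$:

```python
import numpy as np

# Hypothetical time-varying rate: lambda(t) = a + b*t events per half hour.
a, b, T = 2.0, 0.5, 48.0
t = np.linspace(0.0, T, 10_001)
lam = a + b * t

# trapezoidal rule, written out to avoid version-specific helpers
numeric = ((lam[:-1] + lam[1:]) / 2 * np.diff(t)).sum()
closed_form = a * T + b * T**2 / 2  # exact integral of a linear rate
print(numeric, closed_form)
```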
EDIT:
I am editing to address the comment by @J Does asking about day of week effects. So, let us first generate some data with strong day of week effects:
# how many weeks of data are available?
WEEKS = 5
# how many observations are available per day?
OBS_PER_DAY = 24*2
# one mean per day of the week, repeated for each week
data = pm.NegativeBinomial.dist(mu=[2,3,1,2,5,9,7]*WEEKS, alpha=1).random(size=OBS_PER_DAY).T.flatten()
Now, one way we can get around that is by having 7 different means, instead of a single one. The PyMC3 model can be written as:
with pm.Model() as model:
alpha = pm.Exponential("alpha", 2.0)
mean = pm.Exponential("mean", 0.2, shape=7)
day = np.arange(WEEKS*7*OBS_PER_DAY)//OBS_PER_DAY%7
obs_data = pm.NegativeBinomial("obs_data", mu=mean[day], alpha=alpha,
observed=data)
trace = pm.sample()
The variable day here associates each observation to the day of the week it came from. Now, we have a model that allows for day of week effects. How can we check if having more than 500 events on a Friday is atypical? The procedure is similar to the homogeneous case:
friday = 4 # assuming the week starts on monday
np.mean(pm.NegativeBinomial.dist(mu=48*trace["mean"][:,friday], alpha=trace["alpha"]).random(10**4)>500)
OK, now what if we want to check if 3000 events in a week is an atypical event? The expected count of events for a week is 48*sum(mean), so we do this:
np.mean(pm.NegativeBinomial.dist(mu=48*trace["mean"].sum(axis=1), alpha=trace["alpha"]).random(10**4)>3000)
Notice that we did not need any fancy integration, since this day-of-week effect makes $\lambda(t)$ a piecewise constant function (hooray!). You will need to integrate the Poisson rate when its functional form is more complicated: for instance, if $\lambda(t)$ is a polynomial, an exponential, a function sampled from a Gaussian process, etc. Unfortunately, it seems to be hard to find resources on this specific topic on the Web. Perhaps I will add something addressing this issue to this answer when I find the time.
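To make the piecewise-constant point concrete, here is the weekly mean computed by "integrating" the daily rates used to simulate the data (with a fitted model you would use the posterior draws of mean instead):

```python
import numpy as np

# the simulated day-of-week rates from above (events per half hour)
day_mu = np.array([2.0, 3.0, 1.0, 2.0, 5.0, 9.0, 7.0])

# integrating a piecewise-constant rate is just rate * duration, summed
weekly_mu = (day_mu * 48).sum()
print(weekly_mu)
```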
Hope I was helpful!
Can Negative Binomial parameters be treated like Poisson?
The negative binomial can be treated like Poisson, but it is ambiguous how to treat it. It will depend on the underlying process that causes the overdispersion. This can occur in different ways.
Below I will describe two ways:
The negative binomial occurs as a Poisson distribution compounded with a gamma distribution
In this case the probability-of-success parameter $p$ changes.
The negative binomial occurs as a counting process where the interval/waiting time between events is geometrically distributed.
In this case the $r$ parameter changes.
1. Compound distribution
You can view the negative binomial distribution as a Poisson distribution compounded with a gamma distribution.
If
$$Y \sim Poisson(\lambda=X)$$
where
$$X \sim Gamma(\alpha,\beta)$$
Then $$Y \sim NB(r=\alpha, p = (\beta+1)^{-1})$$
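This compounding identity is easy to verify by simulation. Conventions differ, though: NumPy's `negative_binomial` takes the success probability, which here is $\beta/(\beta+1)$, the complement of the $p$ above, and NumPy's `gamma` takes a scale, so rate $\beta$ becomes scale $1/\beta$:

```python
import numpy as np

rng = np.random.default_rng(2)
alpha, beta, N = 2.0, 1.0, 300_000

# Poisson rate drawn from Gamma(alpha, rate=beta); NumPy's gamma takes scale=1/beta
compound = rng.poisson(rng.gamma(alpha, 1.0 / beta, size=N))

# the same distribution as a negative binomial, in NumPy's convention
nb = rng.negative_binomial(alpha, beta / (beta + 1.0), size=N)

# both means ~ alpha/beta = 2, both variances ~ alpha/beta + alpha/beta**2 = 4
print(compound.mean(), nb.mean(), compound.var(), nb.var())
```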
With a Poisson process, if you consider a larger time interval, then the distribution of the number of events relates to a Poisson distributed variable with a larger rate coefficient.
For instance, the Poisson rate in the compound distribution is scaled with a factor $c$.
$$Y_c \sim Poisson(\lambda=cX)$$
This is similar to scaling the rate of the gamma distribution.
$$cX \sim Gamma(\alpha,\beta/c)$$
So the compound distribution becomes
$$Y_c \sim NB(r=\alpha, p = (\beta/c+1)^{-1})$$
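A simulation sketch of this scaling, assuming NumPy conventions (gamma takes a scale, so rate $\beta$ becomes scale $1/\beta$; `negative_binomial` takes the success probability, here $(\beta/c)/(\beta/c+1)$), showing that $r=\alpha$ is unchanged while the mean scales with $c$:

```python
import numpy as np

rng = np.random.default_rng(4)
alpha, beta, c, N = 2.0, 1.0, 3.0, 300_000

x = rng.gamma(alpha, 1.0 / beta, size=N)  # one shared Gamma draw per replicate
y_c = rng.poisson(c * x)                  # Poisson with its rate scaled by c

# NB with the same alpha but rate beta/c, in NumPy's success-probability convention
p_np = (beta / c) / (beta / c + 1.0)
nb = rng.negative_binomial(alpha, p_np, size=N)

# both means ~ c * alpha / beta = 6
print(y_c.mean(), nb.mean())
```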
2. Counting process
You can view the negative binomial distribution as occurring in a counting process where the waiting times between events are i.i.d. geometrically distributed.
If you consider the ordered sequence of events $1,2,...,k,k+1,...$ where the time between events follows a geometric distribution:
$$t_k-t_{k-1} \sim Geom(p)$$
Then the number of events within an interval of length $t$ follows a negative binomial distribution with $r=\lfloor t \rfloor$ and the same $p$:
$$N_{\text{events within $t$}} \sim NB(\lfloor t \rfloor, p)$$
In that case, the increase of the time period $t$ over which the counting process is performed corresponds to an increase of the parameter $r$ in the negative binomial distribution.
This case corresponds to the answer by PedroSebe.
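Geometric and negative binomial conventions vary, so the sketch below fixes one: waiting times on $\{0, 1, 2, \dots\}$ (NumPy's geometric minus one) with per-tick event probability $q$, under which the count in a window of integer length $t$ matches NumPy's `negative_binomial(t, 1-q)`:

```python
import numpy as np

rng = np.random.default_rng(3)
q, t, M = 0.4, 10, 20_000   # q: event probability per time tick, t: window length

# waiting times on {0, 1, 2, ...}: NumPy's geometric starts at 1, so subtract 1
gaps = rng.geometric(q, size=(M, 200)) - 1
arrivals = gaps.cumsum(axis=1)           # arrival time of the k-th event
counts = (arrivals < t).sum(axis=1)      # events falling inside the window

# events before the t-th time tick: a negative binomial in NumPy's convention
nb = rng.negative_binomial(t, 1.0 - q, size=M)
print(counts.mean(), nb.mean())   # both ~ t*q/(1-q)
```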
So it will depend on what sort of process you have that generates the negative binomial distribution of counts.