34,201 | Are Bayesian methods robust to violations of normality?

The Bayesian approach is robust only in that few Bayesians would feel right about using the model you have posed, and Bayes makes it relatively easy to use more general models that make fewer assumptions. The most common and simplest extension to the Gaussian model you posed is to use a $t$ distribution with unknown degrees of freedom $\nu$ for the raw data. A prior distribution for $\nu$ could favor normality ($\nu > 20$) if you had such knowledge, and as $n \uparrow$, $\nu$ would "follow the data" to allow the tails to be as heavy as needed.

To get a sense of how massive a departure this is from the classical approach: textbooks urge students to assess normality and to try different data transformations to achieve it. The result is falsely narrow confidence intervals and much subjectivity. Lack of admission of the subjectivity, i.e., not accounting for model uncertainty, is what makes the intervals too narrow. A real Bayesian approach does not think dichotomously about normality, and when the analysis is finished, instead of asking "what if I am wrong?" you obtain a posterior probability of normality, e.g., $\Pr(\nu > 20)$.

If you do not believe the data distribution to be symmetric, you can use a similar process based on a skew $t$ distribution. Bayes encourages us to be honest about what we do not know by including a parameter for what we don't know. If you only knew that the distribution was continuous, you might need 4 parameters (location, scale, skewness, kurtosis).
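A quick non-Bayesian illustration of the key idea - letting $\nu$ "follow the data" - is a maximum-likelihood $t$ fit. This is only a sketch using SciPy on simulated data; the sample size and the true $\nu = 3$ are arbitrary choices, and a full Bayesian treatment would put a prior on $\nu$ rather than use a point estimate:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

heavy = rng.standard_t(df=3, size=2000)   # heavy-tailed data
normal = rng.normal(size=2000)            # approximately Gaussian data

# Maximum-likelihood fit of a t distribution: (nu, location, scale)
nu_heavy, loc_h, scale_h = stats.t.fit(heavy)
nu_normal, loc_n, scale_n = stats.t.fit(normal)

print(f"fitted nu, heavy-tailed data: {nu_heavy:.1f}")   # small nu: heavy tails
print(f"fitted nu, Gaussian data:     {nu_normal:.1f}")  # large nu: near-normal
```

The fitted $\nu$ is small when the tails are heavy and large when the data are near-Gaussian, which is exactly the adaptivity the answer describes.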
34,202 | Does failing to reject the null hypothesis mean rejecting the alternative? [duplicate]

In statistics there are two types of errors:

Type I: the null hypothesis is correct, but we reject it.

Type II: the alternative is correct, but we fail to reject the null.

A Type I error is connected to statistical significance; a Type II error is connected to statistical power.

Many frequentists remember significance and forget about power. This leads them to state that failing to reject the null means accepting the null - WHICH IS WRONG. The true statement is that failing to reject the null means we do not know anything - unless, of course, we have knowledge about the power.

Let's imagine that we have a test with 5% significance but very low power - say 10% - and that we failed to reject the null. A false positive (a Type I error) is then not our concern. We now wish to decide whether we should accept the null (reject the alternative), and without knowledge of the test's power we can do nothing. But if we know that the power of this test is 10%, we know that when the alternative is true, the test will correctly reject the null in only 10% of cases - in 90% of the cases where the alternative is correct, we will fail to reject the null!

The problem with power is that in most cases it is a function of many aspects of the situation: the test itself, the sample size, unknown parameters, satisfaction of the test's assumptions, and probably more. In most cases it cannot be calculated directly and is approximated by Monte Carlo simulation. And every time those conditions change, the power is completely different.

For more on this problem, read Amrhein et al. (2019) - a short, popular-science article in Nature which describes the issue in a more elaborate way. For those more curious, I'd suggest taking a look at Wasserstein and Lazar (2016) - the original ASA statement.

Amrhein, Valentin, Sander Greenland, and Blake McShane. "Scientists rise up against statistical significance." Nature 567 (2019): 305-307.

Wasserstein, Ronald L., and Nicole A. Lazar. "The ASA statement on p-values: context, process, and purpose." The American Statistician 70.2 (2016): 129-133.
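The low-power scenario above can be sketched with a Monte Carlo simulation. The one-sample t-test, the effect size of 0.25 SD, and n = 10 below are hypothetical choices that happen to give roughly 10% power at the 5% level:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Hypothetical low-power setup: one-sample t-test at 5% significance,
# a small true effect (0.25 SD) and a small sample (n = 10).
n, effect, alpha, n_sims = 10, 0.25, 0.05, 20_000

# The alternative is TRUE in every simulated dataset.
x = rng.normal(loc=effect, scale=1.0, size=(n_sims, n))
pvals = stats.ttest_1samp(x, popmean=0.0, axis=1).pvalue

power = (pvals < alpha).mean()
print(f"estimated power: {power:.2f}")
print(f"P(fail to reject | alternative true): {1 - power:.2f}")
```

Even though every dataset here was generated under the alternative, roughly nine out of ten of them fail to reject the null - exactly the situation described above.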
34,203 | What does it mean that a dataset is "biased"?

The term "biased" simply means that your sample is not chosen randomly.

This is similar to a biased die, which produces the number 6 more often than the other numbers.

It is always difficult to obtain an unbiased sample, but some notoriously well-known errors are:

non-response bias (some people respond, some do not),
voluntary response bias (questions attract very opinionated people),
volunteer bias (volunteers don't represent the whole population),
survivorship bias (concentration on the "survivors" of a particular process),
availability bias (selecting easily available people / things).

Here and here are listed and explained some other types of biases.
34,204 | What does it mean that a dataset is "biased"?

From working as a statistician whose main role is consulting for the subject-matter experts who also work for us, I have noticed that people with less of an understanding of statistics throw the word "bias" out when they just want to say something is wrong.

They really have no idea what they are saying when they say something has bias, and will say it any time they are concerned, as a kind of catch-all, even if the context has nothing to do with bias. Many times when I am explaining something to someone, they respond "what about bias?" even though it has nothing to do with the conversation at hand.

I suspect this may be the case in your scenario, specifically when you see them saying something like:

"All datasets are biased in some way. How is your dataset biased?"

which is certainly not true.

Just a note that this gets multiplied when we start talking about buzzwords like machine learning. I've had people give me a dataset and ask me "can you machine learn this...?".
34,205 | What does it mean that a dataset is "biased"?

Perhaps you know that when iPhone users text each other, there is a blue "send" arrow instead of the green that you get when you text someone who uses another type of phone. To collect data, you randomly text numbers, but only if the arrow is blue. Your sample is biased, since you've excluded people who, for whatever reason, do not use iPhones. Perhaps political viewpoints influence phone purchase decisions. If you were texting about something political, you've excluded certain viewpoints.
34,206 | confidence interval for 2-sample t test with scipy

It's a very good detailed answer provided by @BruceET. So if you want to do it in Python, you have to calculate the pooled standard error yourself. I moved the code from this link, and you can see it gives you something similar to Bruce's answer:

import numpy as np
from scipy.stats import t
import pandas as pd

def welch_ttest(x1, x2):
    n1 = x1.size
    n2 = x2.size
    m1 = np.mean(x1)
    m2 = np.mean(x2)
    v1 = np.var(x1, ddof=1)
    v2 = np.var(x2, ddof=1)
    pooled_se = np.sqrt(v1 / n1 + v2 / n2)
    delta = m1 - m2
    tstat = delta / pooled_se
    # Welch-Satterthwaite degrees of freedom
    df = (v1 / n1 + v2 / n2)**2 / (v1**2 / (n1**2 * (n1 - 1)) + v2**2 / (n2**2 * (n2 - 1)))
    # two-sided t-test
    p = 2 * t.cdf(-abs(tstat), df)
    # lower and upper bounds of the 95% CI
    lb = delta - t.ppf(0.975, df) * pooled_se
    ub = delta + t.ppf(0.975, df) * pooled_se
    return pd.DataFrame(np.array([tstat, df, p, delta, lb, ub]).reshape(1, -1),
                        columns=['T statistic', 'df', 'pvalue 2 sided',
                                 'Difference in mean', 'lb', 'ub'])

We run this function; I named the lower and upper bounds of the 95% CI lb and ub, and you can simply modify them in the function:

welch_ttest(ts1, ts2)

   T statistic        df  pvalue 2 sided  Difference in mean        lb        ub
0    -1.832542  17.90031         0.08356                -1.0 -2.146912  0.146912
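As a cross-check, the Welch statistic and p-value (though not the confidence interval itself) can be compared against SciPy's built-in test; the data below are the ts1/ts2 vectors used in the R answer that follows:

```python
import numpy as np
from scipy import stats

ts1 = np.array([11, 9, 10, 11, 10, 12, 9, 11, 12, 9])
ts2 = np.array([11, 13, 10, 13, 12, 9, 11, 12, 12, 11])

# equal_var=False requests Welch's t-test (unequal variances)
res = stats.ttest_ind(ts1, ts2, equal_var=False)
print(res.statistic, res.pvalue)  # about -1.8325 and 0.0836
```

The statistic and p-value match the hand-rolled function, which confirms the degrees-of-freedom formula above.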
34,207 | confidence interval for 2-sample t test with scipy

Not sure about SciPy. Maybe there's a SciPy help site that will show the code. [Perhaps this.]

In R, a 95% CI is part of the t.test output, where the Welch version of the 2-sample t test is the default (and the argument var.eq=T gets you the pooled test).

ts1 = c(11,9,10,11,10,12,9,11,12,9)
ts2 = c(11,13,10,13,12,9,11,12,12,11)
t.test(ts1, ts2)

        Welch Two Sample t-test

data:  ts1 and ts2
t = -1.8325, df = 17.9, p-value = 0.08356
alternative hypothesis: true difference in means is not equal to 0
95 percent confidence interval:
 -2.1469104  0.1469104
sample estimates:
mean of x mean of y 
     10.4      11.4 

Because the 95% CI includes $0$, the 2-sided test does not reject $H_0: \mu_1=\mu_2$ at the 5% level.

The 95% margin of error is $t^*\sqrt{\frac{S_1^2}{n_1}+\frac{S_2^2}{n_2}},$ where $t^*$ cuts probability $0.025=2.5\%$ from the upper tail of Student's t distribution with degrees of freedom $\nu^\prime$ found from the Welch formula involving the sample variances and sample sizes. [Here, $\nu^\prime = 17.9,$ in some software rounded down to an integer. One always has $\min(n_1-1,n_2-1) \le \nu^\prime \le n_1+n_2-2.$]

me = qt(.975, 17.9)*sqrt(var(ts1)/10 + var(ts2)/10);  me
[1] 1.146912
pm = c(-1,1)
-1 + pm*me
[1] -2.1469118  0.1469118

It's always a good idea to keep the actual formulas in mind, even if one hopes to use them only rarely.
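Since the question asked about SciPy, the same margin-of-error arithmetic can be reproduced in Python with scipy.stats.t - a sketch mirroring the R computation above, with the same data:

```python
import numpy as np
from scipy import stats

ts1 = np.array([11, 9, 10, 11, 10, 12, 9, 11, 12, 9])
ts2 = np.array([11, 13, 10, 13, 12, 9, 11, 12, 12, 11])

n1, n2 = len(ts1), len(ts2)
v1, v2 = ts1.var(ddof=1), ts2.var(ddof=1)
se = np.sqrt(v1 / n1 + v2 / n2)          # SE of the difference in means

# Welch degrees of freedom nu'
nu = se**4 / ((v1 / n1)**2 / (n1 - 1) + (v2 / n2)**2 / (n2 - 1))
me = stats.t.ppf(0.975, nu) * se          # 95% margin of error
delta = ts1.mean() - ts2.mean()

print(nu)                                 # about 17.9
print(delta - me, delta + me)             # about -2.1469 and 0.1469
```

This reproduces the confidence limits from the R output above to several decimal places.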
34,208 | confidence interval for 2-sample t test with scipy

I found my own question from a couple of years ago, and let me now add a very simple answer in Python. There is no need to stick to SciPy - I didn't know that back then. Instead, install the Pingouin library:

https://pingouin-stats.org/

Then:

import pingouin as pg
res = pg.ttest(ts1, ts2, paired=False)
print(res)

Output:

               T  dof alternative     p-val          CI95%   cohen-d   BF10     power
T-test -1.832542   18   two-sided  0.083467  [-2.15, 0.15]  0.819538  1.225  0.411029
34,209 | Let A and B be two random variables, both independent from another random variable C. Is A*B also independent from C?

If all you have is pairwise independence, then there is a counterexample. Suppose the following four cases each have probability $\frac14$:

A  B  C  AB
0  0  0  0
0  1  1  0
1  0  1  0
1  1  0  1

Then $A$ is independent of $C$, $B$ is independent of $C$, and $A$ is independent of $B$.

But $AB$ and $C$ are not independent, as $\mathbb P(AB=1\mid C=0)=\frac12\not= 0=\mathbb P(AB=1 \mid C=1).$

In this example $A$, $B$ and $C$ are pairwise independent, as suggested by the question, but are not mutually independent. If they had been mutually independent, then it would also follow that $AB$ is independent of $C$. A slightly weaker sufficient condition: if the pair $(A,B)$ were jointly independent of $C$, then $AB$ would be independent of $C$ even if $A$ and $B$ were not independent of each other.
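The counterexample can be verified mechanically by enumerating the four equally likely cases; this sketch uses exact rational arithmetic so the independence checks are exact rather than floating-point:

```python
from fractions import Fraction

# The four equally likely outcomes (a, b, c) from the counterexample table
cases = [(0, 0, 0), (0, 1, 1), (1, 0, 1), (1, 1, 0)]
p = Fraction(1, 4)

def prob(pred):
    """Probability of the event described by pred(a, b, c)."""
    return sum((p for a, b, c in cases if pred(a, b, c)), Fraction(0))

# Pairwise independence: P(X=x, Y=y) == P(X=x) P(Y=y) for each pair
for vx in (0, 1):
    for vy in (0, 1):
        assert prob(lambda a, b, c: a == vx and c == vy) == \
               prob(lambda a, b, c: a == vx) * prob(lambda a, b, c: c == vy)
        assert prob(lambda a, b, c: b == vx and c == vy) == \
               prob(lambda a, b, c: b == vx) * prob(lambda a, b, c: c == vy)
        assert prob(lambda a, b, c: a == vx and b == vy) == \
               prob(lambda a, b, c: a == vx) * prob(lambda a, b, c: b == vy)

# ...but A*B is NOT independent of C:
p_ab1_c0 = prob(lambda a, b, c: a * b == 1 and c == 0) / prob(lambda a, b, c: c == 0)
p_ab1_c1 = prob(lambda a, b, c: a * b == 1 and c == 1) / prob(lambda a, b, c: c == 1)
print(p_ab1_c0, p_ab1_c1)  # 1/2 vs 0
```

All twelve pairwise checks pass, while the two conditional probabilities for $AB$ differ, confirming the table above.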
34,210 | Counting samples seem to not be Poisson distributed, need sanity check

A true Poisson distribution has its mean exactly equal to its variance. For a sample from a Poisson distribution, however, there will be some deviation - with only 20 samples, it's unlikely that the sample mean and variance would be exactly equal. For the most part, you seem to have a strong correlation between the mean and variance, which is good. You could also find the confidence intervals around your parameter estimates, to take a hypothesis-testing approach to determining whether your mean and variance estimates are really statistically different from one another. With a very large sample size you'll have very good estimates, which should be very nearly equal if the data are indeed Poisson distributed; for smaller sample sizes your estimates won't be as good, so some numerical difference between the mean and the variance is expected.
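To see how far the sample variance of 20 Poisson draws typically wanders from the mean, here is a small simulation sketch (the rate of 5 and the number of simulations are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(42)
lam, n, n_sims = 5.0, 20, 10_000

# n_sims experiments, each consisting of n Poisson(lam) counts
draws = rng.poisson(lam, size=(n_sims, n))
sample_vars = draws.var(axis=1, ddof=1)

# The expected sample variance equals lam, but individual samples scatter widely
print(f"mean of the sample variances: {sample_vars.mean():.2f}")
print(f"middle 95% of sample variances: {np.percentile(sample_vars, 2.5):.2f}"
      f" to {np.percentile(sample_vars, 97.5):.2f}")
```

With only 20 counts per sample, sample variances well away from the true mean are common, so a moderate mean/variance mismatch need not rule out a Poisson model.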
34,211 | Counting samples seem to not be Poisson distributed, need sanity check | +1 to Nuclear Wang's answer.
It's always good to simulate a little to get a feeling for the randomness involved in a situation like this. Here is a little something I hacked together, which may be enlightening.
I'll take the means you observed as the true means of your underlying data generating process. Then I'll assume that the data are actually Poisson distributed, and generate simulations using these assumptions. Then I'll calculate the variance for such simulations. I'll do this, say, fifty times, so I end up with fifty different variances we might observe under the null hypothesis of Poisson distributions with the means you observed. Finally, I'll plot these variances (as dot coulds, jittered horizontally), and also the variances you actually observed (as horizontal red lines):
In panels B-F, the observed variances are pretty much in the middle of the point clouds of simulated variances, so they seem to be quite consistent with a Poisson assumption. In panel A, the variance you observe is way higher than all the variances we observed, so this one does not appear consistent with a Poisson assumption. (We could call this an informal hypothesis test and say that we got $p=\frac{1}{50}=0.02$, though this doesn't account for the uncertainty in the mean.)
R code:
n_sims <- 50
n_sample <- 20
means <- c(4.90, 9.45, 8.65, 1.45, 18.35, 0.80)
vars_obs <- c(9.8842105, 6.4710526, 7.3973684, 1.3131579, 15.6078947, 0.5894737)
vars <- matrix(NA,nrow=n_sims,ncol=length(means),
dimnames=list(NULL,LETTERS[seq_along(means)]))
for ( ii in 1:n_sims ) {
set.seed(ii) # for reproducibility
vars[ii,] <- sapply(means,function(mm)var(rpois(n_sample,mm)))
}
opar <- par(mfrow=c(2,3))
for ( jj in seq_along(means) ) {
plot(rnorm(n_sims),vars[,jj],pch=19,main=colnames(vars)[jj],
ylim=range(c(vars[,jj],vars_obs[jj])),xaxt="n",las=1,ylab="",xlab="")
abline(h=vars_obs[jj],col="red",lwd=2)
}
par(opar) | Counting samples seem to not be Poisson distributed, need sanity check | +1 to Nuclear Wang's answer.
It's always good to simulate a little to get a feeling for the randomness involved in a situation like this. Here is a little something I hacked together, which may be enl | Counting samples seem to not be Poisson distributed, need sanity check
+1 to Nuclear Wang's answer.
It's always good to simulate a little to get a feeling for the randomness involved in a situation like this. Here is a little something I hacked together, which may be enlightening.
I'll take the means you observed as the true means of your underlying data generating process. Then I'll assume that the data are actually Poisson distributed, and generate simulations using these assumptions. Then I'll calculate the variance for such simulations. I'll do this, say, fifty times, so I end up with fifty different variances we might observe under the null hypothesis of Poisson distributions with the means you observed. Finally, I'll plot these variances (as dot coulds, jittered horizontally), and also the variances you actually observed (as horizontal red lines):
In panels B-F, the observed variances are pretty much in the middle of the point clouds of simulated variances, so they seem to be quite consistent with a Poisson assumption. In panel A, the variance you observe is way higher than all the variances we observed, so this one does not appear consistent with a Poisson assumption. (We could call this an informal hypothesis test and say that we got $p=\frac{1}{50}=0.02$, though this doesn't account for the uncertainty in the mean.)
R code:
n_sims <- 50
n_sample <- 20
means <- c(4.90, 9.45, 8.65, 1.45, 18.35, 0.80)
vars_obs <- c(9.8842105, 6.4710526, 7.3973684, 1.3131579, 15.6078947, 0.5894737)
vars <- matrix(NA,nrow=n_sims,ncol=length(means),
dimnames=list(NULL,LETTERS[seq_along(means)]))
for ( ii in 1:n_sims ) {
set.seed(ii) # for reproducibility
vars[ii,] <- sapply(means,function(mm)var(rpois(n_sample,mm)))
}
opar <- par(mfrow=c(2,3))
for ( jj in seq_along(means) ) {
plot(rnorm(n_sims),vars[,jj],pch=19,main=colnames(vars)[jj],
ylim=range(c(vars[,jj],vars_obs[jj])),xaxt="n",las=1,ylab="",xlab="")
abline(h=vars_obs[jj],col="red",lwd=2)
}
par(opar)
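The same parametric-bootstrap idea can be sketched in stdlib-only Python (illustrative: the seed is arbitrary, `rpois` is a hand-rolled Knuth sampler, and only panel A's mean and observed variance from above are used):

```python
import math
import random

def rpois(lam):
    """One Poisson(lam) draw via Knuth's multiplication algorithm."""
    threshold = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= random.random()
        if p <= threshold:
            return k
        k += 1

def sample_variance(xs):
    """Unbiased sample variance (n - 1 denominator)."""
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

random.seed(1)  # arbitrary seed, for reproducibility only
n_sims, n_sample = 1000, 20
mean_a, var_obs_a = 4.90, 9.8842105  # panel A values from the answer above

# Simulate many Poisson data sets of the same size and record their variances.
sim_vars = [sample_variance([rpois(mean_a) for _ in range(n_sample)])
            for _ in range(n_sims)]

# Fraction of simulated data sets whose variance is at least as large as the
# observed one -- an informal p-value for panel A, as in the text above.
frac = sum(v >= var_obs_a for v in sim_vars) / n_sims
```

With more simulations than the fifty used above, the informal p-value for panel A is estimated more precisely, but the conclusion is the same: the observed variance is far out in the tail of what a Poisson model produces.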
34,212 | Counting samples seem to not be Poisson distributed, need sanity check | One can formally test the fit of the data to the theoretical Poisson distribution using
a chi-squared goodness of fit test, via the goodfit() function included in the vcd package.
Null hypothesis significance test: if p < .05 (or whatever threshold we might choose in advance), we will reject the hypothesis that x comes from a Poisson process.
More flexible approach: we simply assess the degree of evidence against the null based on the size of the p-value, where smaller p = stronger evidence.
require(vcd)
x = 0:5
freqx = c(7, 12, 7, 3, 1, 1) # Counts at each level of x
# goodfit() accepts a two-column object with frequencies first, then counts;
# passing x alone would treat the values 0:5 themselves as the raw data
gf = goodfit(data.frame(freqx, x), type = "poisson", method = "ML")
summary(gf)
Goodness-of-fit test for poisson distribution
X^2 Likelihood Ratio = 1.91, df = 4, P(> X^2) = 0.75
plot (gf, main= "x vs Poisson distribution")
In plots such as these from vcd, bars lifted above the x-axis reflect negative residuals (too few occurrences to fit the theoretical distribution); bars that extend below, positive residuals (too many). This may seem less than intuitive, and the plot can't be modified in some of the usual ways (text size relative to bar size, etc), and so the following approach may be more helpful.
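As a quick sanity check on the printed output, the tail probability P(> X^2) of a chi-square distribution with an even number of degrees of freedom has a simple closed form. This stdlib-only Python sketch (illustrative, not part of the original answer) reproduces the 0.75 from the goodfit() summary:

```python
import math

def chi2_sf_even_df(x, df):
    """Survival function P(X > x) for a chi-square variable with even df,
    using the closed-form series exp(-x/2) * sum_{i<df/2} (x/2)^i / i!."""
    assert df % 2 == 0 and df > 0
    k = df // 2
    term, total = 1.0, 1.0
    for i in range(1, k):
        term *= (x / 2) / i
        total += term
    return math.exp(-x / 2) * total

# Likelihood-ratio statistic 1.91 on 4 df, as printed by goodfit() above.
p = chi2_sf_even_df(1.91, 4)
print(round(p, 2))  # 0.75
```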
require(fitdistrplus)
poisson = fitdist(rep(x, freqx), 'pois', method = 'mle') # expand the frequency table into raw observations
print(poisson) # Identify the best value of lambda (which indicates both mean and variance) to insert below
dist = dpois(0:5, lambda = poisson$estimate) * sum(freqx) # use the fitted lambda (about 1.42 for these data)
df = as.data.frame(dist)
windows()
# Then we plot first the observed distribution, as vertical lines, and then the theoretical distribution as the curve
plot(x, freqx, type='h', lwd=2, main = 'Curve = Fitted Poisson Values',
cex.axis=1.2, cex.lab=1.2)
lines(x, df$dist, col = 'red', lwd=3)
34,213 | Random variable vs Statistic? [duplicate] | A statistic is a function defined over one or more random variables.
So yes, a statistic is a random variable, and follows a distribution.
Another answer gave the example of the mean of a bunch of iid normal random variables.
$X_1,...,X_n\sim N(\mu,\sigma^2)$
The mean is a statistic because it is a function defined over random variables
$$\bar{X}= g(X_1, X_2 ... X_n) = \frac{1}{n} \sum_{i=1}^n X_i $$
There is one condition however, which is that a statistic cannot explicitly depend on unknown parameters. Take the following definition of $g$ :
$$ g(X_1) = \frac{X_1 - \mu}{\sigma}$$
While $g$ here is a function of a random variable, and it follows a standard normal distribution, it's not a statistic (unless $\mu$ and $\sigma$ are known).
For a more detailed explanation see pg. 122 of this.
34,214 | Random variable vs Statistic? [duplicate] | The definition of a random variable depends on the context because how precise you need the definition to be depends on what kind of math you are doing. In the simplest definition, a random variable $X: \Omega\rightarrow \mathbb{R}$ is a function from a set of possible outcomes $\Omega$ onto the real numbers $\mathbb{R}$. For example, in the case of a coin, $\Omega = \lbrace H,T\rbrace$, you might have $H=1$ and $T=0$. The probability distribution carries all the info about the probabilities.
A statistic is a quantity computed from a sample. In statistical theories, usually these samples are themselves random variables. So the statistic itself is a random variable. For instance, if
$$X_1,...,X_n\sim N(\mu,\sigma^2)$$
then the sample mean is also a random variable
$$\bar{X}=\frac{1}{n} \sum_{i=1}^n X_i $$
$$\bar{X}\sim N\left(\mu,\frac{\sigma^2}{n}\right)$$
However, in practice, people may differ on whether they consider a statistic to refer to a random variable.
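A small stdlib-only Python simulation (the seed and parameter values are made up for illustration) shows the sample mean behaving exactly as the random variable described above, with variance close to $\sigma^2/n$:

```python
import random
import statistics

random.seed(42)  # arbitrary seed so the sketch is reproducible
mu, sigma, n = 5.0, 2.0, 25  # illustrative values

# Each replication draws a fresh sample of size n and records one
# realisation of the random variable Xbar.
sample_means = [
    statistics.fmean([random.gauss(mu, sigma) for _ in range(n)])
    for _ in range(20000)
]

# Theory: Xbar ~ N(mu, sigma^2 / n), so the variance of the recorded
# means should be near 2.0**2 / 25 = 0.16.
est_mean = statistics.fmean(sample_means)
est_var = statistics.variance(sample_means)
```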
34,215 | Random variable vs Statistic? [duplicate] | The key thing is that a statistic is not a function of unknown parameters; so not all random variables are statistics. See Examples of a statistic that is not independent of sample's distribution?. And note that the distribution of a statistic may depend on unknown parameters; cf. a pivot, a random variable whose distribution does not depend on unknown parameters, though it may be a function of unknown parameters.
That something is a statistic doesn't mean you have to call it one: in applications the term tends to be reserved for random variables that play a special rôle in inference or description, & typically reduce the dimensionality of the sample space. Your T is a random variable; & also a statistic, even though no such motivation for using the latter term is evident.
34,216 | what is the advantage of b-splines over other splines? | Splines are a large class of methods.
The method of B-splines is a simple method for taking a single covariate and expanding it such that it spans the set of all functions that are a polynomial of degree $d$ between all the given knots and $d-1$ differentiable everywhere. They are not the only way to achieve such an expansion of a covariate, but any other expansion will span the exact same set of functions, and B-splines have some nice numerical properties, so if you want splines that fit those smoothness conditions, there's not a good reason not to use B-splines.
B-splines are particularly popular because they are very simple and can be easily plugged into any regression model to create non-linear effects, without any special editing of the software.
But there's plenty of other types of splines you might want to use. To name a few:
M-splines: splines that span strictly positive functions
I-splines: splines that span strictly monotonic functions
Natural splines: splines whose 1st derivative is constant outside of the knots
P-splines: splines whose derivatives are penalized to enforce smoothness (also known as smoothing splines)
Note that M+I splines are very special cases; if you want to use them, B-splines are simply not the right tool for the job. Natural splines are really a subset of B-splines. P-splines have some very nice properties, but in general require special software implementations.
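For concreteness, here is a minimal sketch of the Cox–de Boor recursion that defines the B-spline basis (illustrative and unoptimised; real spline libraries use vectorised, numerically careful versions). It checks the partition-of-unity property on the interior of the knot sequence:

```python
def bspline_basis(i, d, knots, x):
    """Cox-de Boor recursion: value of the i-th B-spline basis of degree d at x."""
    if d == 0:
        # Degree-0 basis: indicator of the half-open knot interval [t_i, t_{i+1}).
        return 1.0 if knots[i] <= x < knots[i + 1] else 0.0
    left = 0.0
    if knots[i + d] != knots[i]:
        left = ((x - knots[i]) / (knots[i + d] - knots[i])
                * bspline_basis(i, d - 1, knots, x))
    right = 0.0
    if knots[i + d + 1] != knots[i + 1]:
        right = ((knots[i + d + 1] - x) / (knots[i + d + 1] - knots[i + 1])
                 * bspline_basis(i + 1, d - 1, knots, x))
    return left + right

# Cubic B-splines on equally spaced (illustrative) knots; on the interior
# interval [knots[3], knots[6]] the basis functions sum to exactly 1.
degree = 3
knots = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
n_basis = len(knots) - degree - 1  # 6 basis functions here
total = sum(bspline_basis(i, degree, knots, 4.5) for i in range(n_basis))
```

Each basis function is non-negative and non-zero over only $d+1$ knot intervals; that local support is one of the "nice numerical properties" mentioned above.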
34,217 | what is the advantage of b-splines over other splines? | The question can be reframed, to avoid ambiguity, as: What's the advantage of b-splines over the conventional splines using monomial basis?
The fundamental advantage of using B-spline representation is that the equation system for the expansion coefficients is numerically more stable than using monomial basis, which results in Vandermonde systems, which can be quite ill-conditioned.
The goal of choosing different basis functions is to improve the conditioning of the resulting linear equation system and to reduce the computational effort for determining the coefficients.
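A tiny illustration of the conditioning point (not from the original answer; the grid and powers are assumptions for illustration): on a grid in (0, 1], high-degree monomials are almost perfectly collinear, which is exactly what makes Vandermonde-type systems ill-conditioned. B-spline basis functions, with their local support, avoid this near-collinearity.

```python
def pearson(u, v):
    """Plain Pearson correlation between two equal-length sequences."""
    n = len(u)
    mu, mv = sum(u) / n, sum(v) / n
    cov = sum((a - mu) * (b - mv) for a, b in zip(u, v))
    su = sum((a - mu) ** 2 for a in u) ** 0.5
    sv = sum((b - mv) ** 2 for b in v) ** 0.5
    return cov / (su * sv)

xs = [i / 100 for i in range(1, 101)]  # illustrative grid on (0, 1]
r_low = pearson([x for x in xs], [x**2 for x in xs])      # x vs x^2
r_high = pearson([x**8 for x in xs], [x**9 for x in xs])  # x^8 vs x^9
# r_high is far closer to 1: adjacent high-degree monomial columns of a
# Vandermonde matrix are nearly linearly dependent.
```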
34,218 | Hypothesis testing: Null Hypothesis for one-sided tests | This is a common approach in some introductory statistics textbooks. The alternative hypothesis can be directional (e.g., $H_a : \mu > \mu_0$) or non-directional (e.g., $H_a : \mu \ne \mu_0$), but the null hypothesis is always written as an equality (e.g., $H_0 : \mu = \mu_0$).
Your evaluation is correct: this would be the mutually exclusive alternative only for the non-directional test. The appropriate mutually exclusive option for the first alternative hypothesis above would properly be $H_0 : \mu \le \mu_0$.
So, ¿why do textbook authors just always write the null with the equality sign? Well, it comes down to what you can (and cannot) draw. I can draw a picture of a hypothetical world where the population mean is a given value (say $\mu_0$). I can sketch the normal curve, indicate the center is at $\mu_0$, and I'm good to go. What I can't do is draw infinitely many other such curves where $\mu \le \mu_0$.
OK...but ¿won't the $P$-values be different if I drew different curves? Yes, they would, but if you conduct a thought-experiment of what the new $P$-value would be if you did have a normal curve with a shifted mean, that new $P$-value will always be less than the one you calculated with the fixed null hypothesis.
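This thought-experiment is easy to check numerically. In the stdlib-only Python sketch below (all numbers are made up for illustration, with $\sigma$ treated as known), the one-sided $P$-value computed under any null mean below $\mu_0$ is smaller than the one computed at $\mu_0$ itself:

```python
import math

def one_sided_p(xbar, mu, sigma, n):
    """P(Xbar >= observed xbar) when the true mean is mu (sigma known)."""
    z = (xbar - mu) / (sigma / math.sqrt(n))
    return 0.5 * (1 - math.erf(z / math.sqrt(2)))  # 1 - Phi(z)

# Hypothetical numbers: observed mean 10.5 from n = 25 with sigma = 2,
# testing against mu_0 = 10.
xbar_obs, sigma, n, mu0 = 10.5, 2.0, 25, 10.0
p_at_mu0 = one_sided_p(xbar_obs, mu0, sigma, n)

# Shifting the null curve to any mean below mu_0 only shrinks the p-value,
# so mu = mu_0 is the worst (most conservative) case within H0: mu <= mu_0.
p_shifted = [one_sided_p(xbar_obs, mu, sigma, n) for mu in (9.0, 9.5, 9.9)]
```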
And in the end, technically, I can't calculate a separate $P$-value for the infinite options indicated in $H_0: \mu \le \mu_0$, but I can calculate one for $H_0 : \mu = \mu_0$. (Well, at least not unless we go down a Bayesian path...)**
Hope this helps justify the pedagogic rationale behind this (seemingly) wrong conventional notation.
Footnotes/Comments
**This comment is based on the more simplistic definition of $P$-value used in most introductory statistics textbooks. A more general definition of the $P$-value can account for this, and is described in another answer below.
34,219 | Hypothesis testing: Null Hypothesis for one-sided tests | @GreggH's answer is excellent, but seems, in the penultimate paragraph, to hint at something fishy going on. In fact a formal definition of p-values takes this kind of situation into account.
When the null hypothesis is composite, specifying a set of values $\Theta_0$ for the unknown parameter $\theta$, a valid p-value $p(x)$ (where $x$ is the observed data) is one whose distribution function at any $\alpha$ does not exceed $\alpha$, whatever the value of $\theta$ might be (within $\Theta_0$):
$$\Pr_\theta \left[ p(X)\leq\alpha \right] \leq \alpha \quad \forall \theta \in \Theta_0, \ \forall\alpha\in[0,1]$$
One way of ensuring validity† is simply to construct the p-value as the supremum of the probability that a test statistic $T$ exceeds or equals its observed value $t$ over all values of $\theta$ within $\Theta_0$:
$$p(x) = \sup_{\theta\in\Theta_0} \Pr_\theta\left[T\geq t\right]$$
In many cases the location of the supremum can easily be seen to be at the boundary with the alternative hypothesis, so there's no difference between testing $H_0:\theta=\theta_0$ or $H_0: \theta\leq\theta_0$ vs $H_1:\theta>\theta_0$.
† In general $\theta$ may be a vector, say $(\phi,\lambda)$, with one component, say $\phi$, being the parameter of interest, & the other, say $\lambda$, being a nuisance parameter; another way to construct a valid p-value is to condition on a statistic that's sufficient for $\lambda$ when $\phi=\phi_0$.
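As a concrete (illustrative, not from the original answer) check of the supremum construction, here is a stdlib-only Python sketch using an exact binomial tail: because the tail probability is increasing in $\theta$, the supremum over the null set $\{\theta \le \theta_0\}$ is attained at the boundary, matching the remark above.

```python
from math import comb

def binom_tail(n, x, theta):
    """P(X >= x) for X ~ Binomial(n, theta), computed exactly."""
    return sum(comb(n, k) * theta**k * (1 - theta)**(n - k)
               for k in range(x, n + 1))

# Hypothetical example: 15 successes out of 20, null set {theta <= 0.5}.
n, x_obs, theta0 = 20, 15, 0.5

# Take the supremum of the tail probability over a grid of theta values
# inside the null set.
grid = [i / 1000 for i in range(0, 501)]
p_sup = max(binom_tail(n, x_obs, th) for th in grid)
p_boundary = binom_tail(n, x_obs, theta0)
# The supremum is attained at the boundary theta = theta0.
```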
34,220 | Hypothesis testing: Null Hypothesis for one-sided tests | In conclusion, it appears to be a matter of convention whether to write the null hypothesis as $H_0 : \mu_s = \mu_t$ or $H_0 : \mu_s \le \mu_t$ (the opposite of the alternative).
34,221 | What is a Hamming Loss ? will we consider it for an Imbalanced Binary classifier | The hamming loss (HL) is
the fraction of the wrong labels to the total number of labels
Hence, for the binary case (imbalanced or not), HL=1-Accuracy as you wrote.
When considering the multi-label use case, you should decide how to extend accuracy to this case. The method chosen by Hamming loss is to give each label equal weight. One could use other methods (e.g., taking the maximum).
Since Hamming loss is designed for the multi-label case while precision, recall, and F1-measure are designed for the binary case, it is more natural to compare Hamming loss to accuracy.
In general, there is no magical metric that is the best for every problem.
In every problem you have different needs, and you should optimize for them.
By the way, specifically for imbalanced problems, accuracy is a problematic metric. For details, see here.
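To make the HL = 1 - accuracy identity concrete, here is a stdlib-only Python sketch (the label matrices are made up for illustration; this is a hand-rolled Hamming loss, not sklearn's implementation):

```python
def hamming_loss(y_true, y_pred):
    """Fraction of label positions where prediction and truth disagree."""
    flat_t = [label for row in y_true for label in row]
    flat_p = [label for row in y_pred for label in row]
    return sum(t != p for t, p in zip(flat_t, flat_p)) / len(flat_t)

# Binary (single-label) case: Hamming loss is exactly 1 - accuracy.
y_true = [[0], [1], [1], [0], [1]]
y_pred = [[0], [1], [0], [0], [1]]
hl = hamming_loss(y_true, y_pred)                       # 1 wrong out of 5
accuracy = sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

# Multi-label case: each label position counts separately,
# e.g. two wrong labels out of four positions gives 0.5.
hl_multi = hamming_loss([[0, 1], [1, 1]], [[0, 0], [0, 1]])
```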
34,222 | What is a Hamming Loss ? will we consider it for an Imbalanced Binary classifier | Since Hamming loss is defined as $$HL = \frac{1}{N L} \sum_{l=1}^L\sum_{i=1}^N Y_{i,l} \oplus X_{i,l},$$ where $\oplus$ denotes exlusive-or, $X_{i,l}$ ($Y_{i,l}$) stands for boolean that the $i$-th datum (prediction) contains the $l$-th label, it really equals to (1 - accuracy) for binary case $(L=1)$: $$HL=\frac{1}{N}\sum_{i=1}^N Y_i \oplus X_i = \frac{1}{N}\sum_{i=1}^N 1 - I(X_i,Y_i) = 1 - \frac{\sum_{i=1}^N I(X_i,Y_i)}{N} =1 - Ac,$$ where $I(X_i, Y_i) = 1$ if $X_i = Y_i$ and 0 otherwise and Ac denotes accuracy.
From the above reason, the use of HL does not make sense to me in the binary case, respectively it is directly related to accuracy. Nevertheless, as mentioned here, the accuracy is ambiguous in the multiple-label case.
The HL thus presents one clear single-performance-value for the multiple-label case, in contrast to the precision/recall/F1 that can be evaluated only for independent binary classifiers for each label.
34,223 | What is a Hamming Loss ? will we consider it for an Imbalanced Binary classifier | In multi-label classification, a misclassification is no longer a hard wrong or right. A prediction containing a subset of the actual classes should be considered better than a prediction that contains none of them. (source)
So accuracy counts the number of correctly classified data instances, while Hamming loss measures the loss generated in the bit string of class labels during prediction. It does that by taking the exclusive or (XOR) between the actual and predicted labels and then averaging across the dataset. (source)
Number of Instances = 2
Number of Labels = 2
Case 1: Actual Same as Predicted
Actual = [[0 1] Predicted= [[0 1]
[1 1]] [1 1]]
Actual XOR Predicted = [[0 0]
                        [0 0]]
from sklearn.metrics import hamming_loss
import numpy as np
print(hamming_loss(np.array([[0,1], [1,1]]), np.array([[0,1], [1,1]])))
HL= 0.0
Case 2: Actual completely different from Predicted
Actual = [[0 1] Predicted= [[1 0]
[1 1]] [0 0]]
Actual XOR Predicted = [[1 1]
                        [1 1]]
from sklearn.metrics import hamming_loss
import numpy as np
print('HL=',hamming_loss(np.array([[0,1], [1,1]]), np.array([[1,0], [0,0]])))
HL = 4/(2*2) = 1
Case 3: Actual partially different from Predicted
Actual = [[0 1] Predicted= [[0 0]
[1 1]] [0 1]]
Actual XOR Predicted = [[0 1]
                        [1 0]]
from sklearn.metrics import hamming_loss
import numpy as np
print(hamming_loss(np.array([[0,1], [1,1]]), np.array([[0,0], [0,1]])))
HL = (1+1)/(2*2) = 0.5
Hamming loss values range from 0 to 1. A lower value of Hamming loss indicates a better classifier.
34,224 | "Unexpected" expectation | The explanation to the Monte Carlo evaluation of the ratio $\mathbb{E}[X_1/(X_1+X_2)]$ taking weird values is that the expectation does not exist. As a transform of a Cauchy $X_1/X_2$ in your Normal example. Indeed,
\begin{align*}
\mathbb{E}[X_1/(X_1+X_2)]
&=\mathbb{E}[1/(1+X_2/X_1)]\\
&=\int_{-\infty}^{+\infty} \frac{1}{1+y}\,\frac{1}{\pi(1+y^2)}\text{d}y
\end{align*}
which is not integrable at $y=-1$ since equivalent to $(y+1)^{-1}$.
Note that $X_1/\bar{X}$ is not a Cauchy variate but the transform of a Cauchy variate by the function$$f:\ y \to \dfrac{n}{1+\sqrt{n-1}y}$$The reason is that$$(X_2+\ldots+X_n)\sim\text{N}(0,n-1)$$and that
$$\frac{X_1}{\bar{X}}=\dfrac{n}{1+(X_2+\ldots+X_n)/X_1}=\dfrac{n}{1+\sqrt{n-1}Z/X_1}$$where $Z\sim\text{N}(0,1)$.
Note that, as $n$ grows to infinity, $X_1/\bar{X}$ converges in distribution to the random variable equal to $\pm \infty$ with probability $1/2$.
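The Cauchy behaviour can be seen empirically: the ratio of two independent standard normals has the Cauchy quartiles $\pm 1$, while its running mean never settles. A simulation sketch (the sample size and seed are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(42)
x1 = rng.standard_normal(1_000_000)
x2 = rng.standard_normal(1_000_000)
ratio = x2 / x1                       # distributed as a standard Cauchy
q25, q75 = np.quantile(ratio, [0.25, 0.75])
# quartiles land close to -1 and +1, the standard Cauchy quartiles;
# the sample mean of 1 / (1 + ratio), by contrast, does not converge
```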
34,225 | How does a quadratic kernel look like? | There are (at least) two ways to think about this.
One is as you mentioned: imagine the points being lifted into the shape of a quadratic function, and then being cut by a plane, producing an ellipse. This is kind of like this picture (stolen from this paper): [figure omitted]
Another way to think about it is: the decision boundary for an SVM will always be of the form $\{ y \mid \sum_i \alpha_i k(x_i, y) = b \}$. For the kernel $k(x, y) = (x^T y + c)^2$, we have:
\begin{align}
\sum_i \alpha_i (x_i^T y + c)^2
&= \sum_i \left[ \alpha_i (x_i^T y)^2 + 2 c \alpha_i x_i^T y + \alpha_i c^2 \right]
\\&= \sum_i \alpha_i y^T x_i x_i^T y + \left( 2 c \sum_i \alpha_i x_i \right)^T y + c^2 \sum_i \alpha_i
\\&= y^T \left( \sum_i \alpha_i x_i x_i^T \right) y + \left( 2 c \sum_i \alpha_i x_i \right)^T y + c^2 \sum_i \alpha_i
\\&= y^T Q y + r^T y + s
,\end{align}
which is itself a quadratic function. So the decision boundary is always going to be the level set of some quadratic function on the input space.
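One can verify numerically that the kernel expansion collapses to a quadratic form $y^T Q y + r^T y + s$; this is a sketch with arbitrary made-up support vectors $x_i$, dual weights $\alpha_i$, and offset $c$:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(5, 2))   # pretend support vectors x_i
alpha = rng.normal(size=5)    # pretend dual coefficients alpha_i
c, y = 1.0, rng.normal(size=2)

# left-hand side: the kernel sum sum_i alpha_i (x_i^T y + c)^2
kernel_val = sum(a * (x @ y + c) ** 2 for a, x in zip(alpha, X))

# right-hand side: the quadratic form y^T Q y + r^T y + s
Q = sum(a * np.outer(x, x) for a, x in zip(alpha, X))
r = 2 * c * sum(a * x for a, x in zip(alpha, X))
s = c ** 2 * alpha.sum()
quad_val = y @ Q @ y + r @ y + s  # same number: the boundary is quadratic
```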
34,226 | How does a quadratic kernel look like? | Suppose we have two features $(x_1, x_2)$, and we expand it into five features $(x_1^2, x_2^2, x_1, x_2, x_1x_2)$
The decision boundary is
$$
\beta_0+\beta_1x_1^2+\beta_2x_2^2+\beta_3x_1+\beta_4x_2+\beta_5x_1x_2=0
$$
The intersection with a plane is an ellipsoidal boundary, which looks like this: (figure omitted)
34,227 | Is logistic regression a "semi-parametric" model? | Logistic regression is not "semi-parametric"; it has only a parametric component. For a parametric model, the number of parameters is fixed and does not depend on the amount of training data, only on the model itself. This is true for logistic regression: with $n$ variables $X_1,\ldots,X_n$ you have $n+1$ parameters $w_0,\ldots,w_n$ defining the model, and this number does not increase or decrease with the number of training examples. Note that non-parametric models also have parameters, but their number is not fixed and depends on the number of training examples.
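The fixed parameter count can be illustrated with scikit-learn on made-up data (a sketch): a logistic regression on $n=5$ features always has $5+1=6$ parameters, however many training rows you supply.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
for rows in (100, 10_000):                      # model size ignores sample size
    X = rng.normal(size=(rows, 5))
    y = (X[:, 0] + 0.5 * rng.normal(size=rows) > 0).astype(int)
    model = LogisticRegression(max_iter=1000).fit(X, y)
    n_params = model.coef_.size + model.intercept_.size
    # n_params == 6 for both sample sizes: n weights plus one intercept
```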
34,228 | Does feature selection help improve the performance of machine learning? | You should not include any variables that obviously could not influence the dependent variable; keep only a pool of variables you hypothesize to impact it. You wouldn't want your model to learn noise from variables that make no logical sense as part of the independent-variable space but happen to have spurious correlations with other variables. Apart from those obvious exclusions, though, how would you know which features/variables are important and which are not? You may think a certain variable will not matter much, but when you actually fit a model it may turn out to have far more discriminatory power than you'd thought!
In tree-based ensemble methods, such as XGBoost, each variable is evaluated as a potential splitting variable, which makes these methods robust to unimportant/irrelevant variables: a variable that cannot discriminate between events and non-events will not be selected as a splitting variable, and hence will sit very low on the variable-importance chart as well. One caveat: if you have two (or more) highly correlated variables, the importance reported for them may not reflect their actual importance (though even this doesn't affect the model's predictive performance). So you may leave all your features in, run a few iterations to see how important they are, and exclude the ones that consistently lie at the bottom of the variable-importance chart from subsequent runs to improve computational performance.
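The splitting-variable behaviour can be illustrated with any tree ensemble; here is a sketch with scikit-learn's gradient boosting on made-up data in which only the first feature carries signal:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))
y = (X[:, 0] > 0).astype(int)      # only feature 0 is informative

clf = GradientBoostingClassifier(random_state=0).fit(X, y)
imp = clf.feature_importances_
# feature 0 dominates; the two noise features sit at the bottom of the chart
```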
34,229 | How to find the probability of extra Sundays in a leap year? | The Gregorian calendar favors five of the seven weekdays during leap years. Therefore the chance is not precisely $2/7$.
This was essentially problem B3 in the 1950 Putnam Mathematics Competition:
$n$ is chosen at random from the natural numbers. Show that the probability that December 25 in year $n$ is a Wednesday is not 1/7.
In the Gregorian Calendar, years that are multiples of $4$ are leap years (with $7\times 52 + 2=366$ days), but years that are multiples of $100$ are not leap years (and therefore have $7\times 52+1=365$ days), with the exception that years that are multiples of $400$ are leap years. (Many of us remember the most recent exception in $2000$.) This creates a $400$ year cycle containing $400/4 - 400/100 + 400/400 = 97$ leap years.
What is especially interesting is that the total number of days in this cycle is a whole multiple of seven:
$$400 \times (7\times 52 + 1) + 97 \times 1 \equiv 400+97 \equiv 71 \times 7 \equiv 0 \operatorname{mod} 7.$$
This shows that the $400$ year cycle comprises a whole number of weeks. Consequently, the pattern of days of the week is exactly the same from one cycle to the next.
We may therefore interpret the question as asking for the chance of $53$ Sundays when sampling randomly and uniformly from any $400$-year cycle of leap years. A brute-force calculation (using, say, the fact that January 1, 2001, was a Monday) shows that $28$ of the $97$ leap years in each cycle have $53$ Sundays. Therefore the chance is
$$\Pr(53\text{ Sundays}) = \frac{28}{97}.$$
Note that this does not equal $28/98 = 2/7$: it is slightly greater.
Incidentally, there is the same chance of $53$ Wednesdays, Fridays, Saturdays, or Mondays and only a $27/97$ chance of $53$ Tuesdays or Thursdays.
For those who would like to make more detailed calculations (and might mistrust any mathematical simplifications), here is brute-force code that computes and examines every day of the week for a given set of years. At the end it displays the number of years with $53$ appearances of each day of the week. It is written in R.
Here is its output for the $2001-2400$ cycle:
Friday Monday Saturday Sunday Thursday Tuesday Wednesday
28 28 28 28 27 27 28
Here is the code itself.
leapyear <- function(y) {
  (y %% 4 == 0 & !(y %% 100 == 0)) | (y %% 400 == 0)
}
leapyears <- seq(2001, length.out=400)
leapyears <- leapyears[leapyear(leapyears)]
results <- sapply(leapyears, function(y) {
table(weekdays(seq.Date(as.Date(paste0(y, "-01-01")), by="1 day", length.out=366)))
})
rowSums(results==53)
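The same count can be cross-checked in Python with only the standard library (a sketch): a 366-day year has 53 Sundays exactly when it starts on a Saturday or a Sunday, so we count leap-year January firsts falling on those days.

```python
import datetime

def is_leap(y):
    # Gregorian rule: divisible by 4, except centuries not divisible by 400
    return (y % 4 == 0 and y % 100 != 0) or y % 400 == 0

leap_years = [y for y in range(2001, 2401) if is_leap(y)]
# weekday(): Monday == 0, ..., Saturday == 5, Sunday == 6
fifty_three = sum(datetime.date(y, 1, 1).weekday() >= 5 for y in leap_years)
print(len(leap_years), fifty_three)  # 97 28
```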
34,230 | How to find the probability of extra Sundays in a leap year? | Yes, your reasoning is correct. In the long run, leap years are nearly equally likely to start on any day of the week. So the chance of the 2 extra days including a Sunday is about 2/7.
whuber points out that a quirk of the Gregorian Calendar causes the starting day of a leap year to be not quite uniformly distributed, so the true probability of 53 Sundays is 1% or so greater than 2/7. However, 2/7 is almost certainly the answer that the authors of your statistics textbook intended you to find.
34,231 | How do I find a variance-stabilizing transformation? | Since the question partly concerns notation and basic concepts, I will be expansive in the following answer, making sure to motivate, describe, and explain the notation, the statistical reasoning, and the mathematical steps used. I hope this has been done in a way that clearly indicates how similar problems can be solved.
"Sample of size $n$" means you have $n$ independent and identically distributed binary observations, often called "success" and "failure". The underlying distribution is determined by the chance of success, which is a number $p$. An estimate of $p$ based on the sample is the proportion of successes, written $\hat p$. It is a random variable.
Because $\hat p$ is bounded (between $0$ and $1$), it has a mean $\mathbb{E}(\hat p)$ and a variance $\operatorname{Var}(\hat p)$. These can be figured out in terms of the underlying chance of success $p$; they are
$$\mathbb{E}(\hat p)=p$$
and
$$\operatorname{Var}(\hat p) = \frac{p(1-p)}{n}.\tag{1}$$
A variance-stabilizing transformation is a function $f$ that converts all possible values of $\hat p$ into other values $Y=f(\hat p)$ in such a way that the variance of $Y$ is constant--usually taken to be $1$. This makes $Y$ simpler to work with because (at least insofar as its first two moments are concerned) it is characterized by its expectation alone, rather than by an expectation and a variance.
It's easy to change the variance of any random variable $X$: multiplying $X$ by a constant, say $\lambda$, multiplies its variance by $\lambda^2$. Dividing $X$ by the square root of its variance will thereby give it a unit variance. We would therefore like the effect of $f$ to be that of dividing $\hat p$ by the square root of $\operatorname{Var}(\hat p)$, no matter what value $\hat p$ might take on.
At this point it is difficult to proceed because we don't actually know $\operatorname{Var}(\hat p)$. However, we can hope the estimate $\hat p$ is close to $p$, in which case we could approximate $\operatorname{Var}(\hat p)$ by plugging $\hat p$ in place of $p$ in formula $(1)$:
$$\widehat{\operatorname{Var}}(\hat p) = \frac{\hat p(1-\hat p)}{n}.\tag{2}$$
This is enough information to find a differentiable transformation $f$. Recall that "differentiable" means $f$ has a well-defined slope at any argument $x$ (with $0\lt x \lt 1$ in this situation). The slope, written $f^\prime(x)$, can be considered a local scaling factor: it is the amount by which $f$ rescales values close to $x$. One can write $$df(x) = f^\prime(x)dx$$ where $dx$ is a small change in $x$ and $df(x)$ is the corresponding change in $f(x)$.
Above, we have seen this local scaling factor ought to be the reciprocal square root of the variance. Using the estimate of that variance in $(2)$ enables us to write the desired property of $f$ in the form
$$df(\hat p) = f^\prime(\hat p)d\hat p = \sqrt{\frac{1}{\widehat{\operatorname{Var}(\hat p)}}}d\hat p= \sqrt{\frac{n}{\hat p(1-\hat p)}}d\hat p.\tag{3}$$
The differential $d\hat p$ is the scale factor near $\hat p$ while the differential $df(\hat p)$ is the scale factor near $f(\hat p)$: as promised, they are related through multiplication by $f^\prime(\hat p)$.
The Fundamental Theorem of Calculus asserts that such first order differential equations for an unknown function $f$ are solved via integration. In this case a solution is readily found by squinting hard at the right hand side of $(3)$ in a search for similar mathematical patterns. The square root suggests we would love for both $\hat p$ and $1-\hat p$ themselves to be squares. In particular, if $\hat p = s^2$ and $1-\hat p = c^2$, then
$$\hat p(1-\hat p) = s^2(1-s^2) = s^2(c^2).$$
That ought to remind anyone of the trigonometric functions sine and cosine. Indeed, for angles $0 \le \theta \le \pi/2$, both the sine and cosine range between $0$ and $1$, implying we really can write $\hat p(\theta) = \sin^2\theta$ for some angle $\theta$. This permits us to write
$$df(\theta) = df(\hat p(\theta)) = f^\prime(\sin^2\theta)d\hat p = \sqrt{\frac{n}{\sin^2\theta\cos^2\theta}}\left(2\sin\theta\cos\theta\right)d\theta = 2\sqrt{n}d\theta.\tag{4}$$
To achieve this simple form I have performed the only real calculation in this answer: the differential of $\hat p = \sin^2\theta$ is $d\hat p=2\sin\theta\cos\theta d\theta$, as asserted by the Chain Rule.
At this point we are done, because when two differential expressions defined on a connected set (like the interval $[0,1]$) are equal, as in the two sides of equation $(4)$, the functions of which they are differentials differ only by an additive constant--but such additive changes of random variables will not alter their variances, so this doesn't matter. Consequently we may take
$$Y = 2\sqrt{n}\theta.$$
Since $\hat p = \sin^2\theta$, we may solve for $\theta=\arcsin\sqrt{\hat p}$ in terms of $\hat p$, giving one solution
$$f(\hat p) = Y = 2\sqrt{n} \arcsin\sqrt{\hat p}.$$
All other solutions are related to that one by some combination of adding a constant (which won't change the variance, as we have noted) and multiplying by some constant (which, although it changes the variance, still preserves the property of being constant).
According to this analysis, $f$ deserves only to be called an approximate variance-stabilizing transformation, rather than the variance-stabilizing transformation. It nevertheless will perform well when $\hat p$ has a reasonably high chance of being close to $p$ itself. This typically is the case when both $n\hat p$ and $n(1-\hat p)$ exceed $5$--but the threshold $5$ can be modified to suit your own criteria for "reasonably high" and "close to."
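A quick simulation confirms the stabilization (a sketch; the choices of $n$, $p$, and replication count are arbitrary): the variance of $Y = 2\sqrt{n}\arcsin\sqrt{\hat p}$ stays close to $1$ across different values of $p$.

```python
import numpy as np

rng = np.random.default_rng(1)
n, reps = 400, 200_000
variances = []
for p in (0.2, 0.5, 0.8):
    phat = rng.binomial(n, p, size=reps) / n     # simulated proportions
    y = 2 * np.sqrt(n) * np.arcsin(np.sqrt(phat))
    variances.append(y.var())
# each entry of `variances` is close to 1, regardless of p
```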
Since the question partly concerns notation and basic concepts, I will be expansive in the following answer, making sure to motivate, describe, and explain the notation, the statistical reasoning, and the mathematical steps used. I hope this has been done in a way that clearly indicates how similar problems can be solved.
"Sample of size $n$" means you have $n$ independent and identically distributed binary observations, often called "success" and "failure". The underlying distribution is determined by the chance of success, which is a number $p$. An estimate of $p$ based on the sample is the proportion of successes, written $\hat p$. It is a random variable.
Because $\hat p$ is bounded (between $0$ and $1$), it has a mean $\mathbb{E}(\hat p)$ and a variance $\operatorname{Var}(\hat p)$. These can be figured out in terms of the underlying chance of success $p$; they are
$$\mathbb{E}(\hat p)=p$$
and
$$\operatorname{Var}(\hat p) = \frac{p(1-p)}{n}.\tag{1}$$
A variance-stabilizing transformation is a function $f$ that converts all possible values of $\hat p$ into other values $Y=f(\hat p)$ in such a way that the variance of $Y$ is constant--usually taken to be $1$. This makes $Y$ simpler to work with because (at least insofar as its first two moments are concerned) it is characterized by its expectation alone, rather than by an expectation and a variance.
It's easy to change the variance of any random variable $X$: multiplying $X$ by a constant, say $\lambda$, multiplies its variance by $\lambda^2$. Dividing $X$ by the square root of its variance will thereby give it a unit variance. We would therefore like the effect of $f$ to be that of dividing $\hat p$ by the square root of $1/\operatorname{Var}(\hat p)$, no matter what value $\hat p$ might take on.
At this point it is difficult to proceed because we don't actually know $\operatorname{Var}(\hat p)$. However, we can hope the estimate $\hat p$ is close to $p$, in which case we could approximate $\operatorname{Var}(\hat p)$ by plugging $\hat p$ in place of $p$ in formula $(1)$:
$$\widehat{\operatorname{Var}}(\hat p) = \frac{\hat p(1-\hat p)}{n}.\tag{2}$$
This is enough information to find a differentiable transformation $f$. Recall that "differentiable" means $f$ has a well-defined slope at any argument $x$ (with $0\lt x \lt 1$ in this situation). The slope, written $f^\prime(x)$, can be considered a local scaling factor: it is the amount by which $f$ rescales values close to $x$. One can write $$df(x) = f^\prime(x)dx$$ where $dx$ is a small change in $x$ and $df(x)$ is the corresponding change in $f(x)$.
Above, we have seen this local scaling factor ought to be the reciprocal square root of the variance. Using the estimate of that variance in $(2)$ enables us to write the desired property of $f$ in the form
$$df(\hat p) = f^\prime(\hat p)d\hat p = \sqrt{\frac{1}{\widehat{\operatorname{Var}(\hat p)}}}d\hat p= \sqrt{\frac{n}{\hat p(1-\hat p)}}d\hat p.\tag{3}$$
The differential $d\hat p$ is the scale factor near $\hat p$ while the differential $df(\hat p)$ is the scale factor near $f(\hat p)$: as promised, they are related through multiplication by $f^\prime(\hat p)$.
The Fundamental Theorem of Calculus asserts that such first order differential equations for an unknown function $f$ are solved via integration. In this case a solution is readily found by squinting hard at the right hand side of $(3)$ in a search for similar mathematical patterns. The square root suggests we would love for both $\hat p$ and $1-\hat p$ themselves to be squares. In particular, if $\hat p = s^2$ and $1-\hat p = c^2$, then
$$\hat p(1-\hat p) = s^2(1-s^2) = s^2(c^2).$$
That ought to remind anyone of the trigonometric functions sine and cosine. Indeed, for angles $0 \le \theta \le \pi/2$, both the sine and cosine range between $0$ and $1$, implying we really can write $\hat p(\theta) = \sin^2\theta$ for some angle $\theta$. This permits us to write
$$df(\theta) = df(\hat p(\theta)) = f^\prime(\sin^2\theta)d\hat p = \sqrt{\frac{n}{\sin^2\theta\cos^2\theta}}\left(2\sin\theta\cos\theta\right)d\theta = 2\sqrt{n}d\theta.\tag{4}$$
To achieve this simple form I have performed the only real calculation in this answer: the differential of $\hat p = \sin^2\theta$ is $d\hat p=2\sin\theta\cos\theta d\theta$, as asserted by the Chain Rule.
At this point we are done, because when two differential expressions defined on a connected set (like the interval $[0,1]$) are equal, as in the two sides of equation $(4)$, the functions of which they are differentials differ only by an additive constant--but such additive changes of random variables will not alter their variances, so this doesn't matter. Consequently we may take
$$Y = 2\sqrt{n}\theta.$$
Since $\hat p = \sin^2\theta$, we may solve for $\theta=\arcsin\sqrt{\hat p}$ in terms of $\hat p$, giving one solution
$$f(\hat p) = Y = 2\sqrt{n} \arcsin\sqrt{\hat p}.$$
All other solutions are related to that one by some combination of adding a constant (which won't change the variance, as we have noted) and multiplying by some constant (which, although it changes the variance, still preserves the property of being constant).
According to this analysis, $f$ deserves only to be called an approximate variance-stabilizing transformation, rather than the variance-stabilizing transformation. It nevertheless will perform well when $\hat p$ has a reasonably high chance of being close to $p$ itself. This typically is the case when $n\hat p$ and $n(1-\hat p)$ both exceed $5$--but the threshold $5$ can be modified to suit your own criteria for "reasonably high" and "close to."
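As a numerical check on this conclusion (a sketch in Python; the choices $n = 200$ and the grid of $p$ values are arbitrary), simulating binomial proportions shows the variance of $2\sqrt{n}\arcsin\sqrt{\hat p}$ staying near $1$ across different $p$:

```python
import numpy as np

rng = np.random.default_rng(0)
n, reps = 200, 100_000

stab_vars = []
for p in (0.2, 0.5, 0.8):
    p_hat = rng.binomial(n, p, size=reps) / n         # simulated proportions
    y = 2 * np.sqrt(n) * np.arcsin(np.sqrt(p_hat))    # the transformation above
    stab_vars.append(float(y.var()))
print([round(v, 2) for v in stab_vars])               # each value close to 1
```

The raw variance of $\hat p$, by contrast, changes by a factor of $0.16/0.25$ between $p = 0.2$ and $p = 0.5$.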
34,232 | Dispersion parameter for Gamma family

The Gamma distribution is defined by two parameters: shape ($\alpha$) and rate ($\beta$).
There is an alternative parameterization through the mean ($\mu$) and the shape, which is the one used in GLMs.
We take $\mu = \alpha/\beta$ and substitute it for the rate (as $\beta = \alpha/\mu$), resulting in the parameterization $\mathrm{Gamma}(\mu,\alpha)$.
In R, glm assumes the shape to be constant (just as linear regression assumes constant variance). To express this assumption, the dispersion ($\phi$) is introduced:
$$
\phi = \frac{1}{\alpha}
$$
For the simple case glm(x ~ 1, family = Gamma(link = 'identity')), summary.glm gives you an $\text{estimate}$ equal to $\mu$ (note that the default link is 'inverse', in which case the estimate is $1/\mu$), and the reported $\text{dispersion}$ is $\phi$.
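A quick numerical illustration of the $\phi = 1/\alpha$ relation (a Python sketch standing in for the R call above, with a moment-based dispersion estimate standing in for the Pearson-based one that summary.glm reports): for an intercept-only model with identity link the fitted mean is the sample mean, and since $\mathrm{Var}(x) = \mu^2/\alpha = \phi\mu^2$, the estimated dispersion should sit near $1/\alpha$:

```python
import numpy as np

rng = np.random.default_rng(1)
alpha, mu = 4.0, 10.0                    # shape and mean; rate beta = alpha / mu
x = rng.gamma(shape=alpha, scale=mu / alpha, size=200_000)

mu_hat = x.mean()                                   # estimate under identity link
phi_hat = np.mean(((x - mu_hat) / mu_hat) ** 2)     # moment-based dispersion
print(round(float(mu_hat), 2), round(float(phi_hat), 3), 1 / alpha)
```

With shape $\alpha = 4$ the estimated dispersion lands near $1/\alpha = 0.25$.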
34,233 | If my goal is to test the absolute change of the ratios, can I compare the ratios directly without log transformation?

Not only do distributions of untransformed ratios have odd shapes not matching the assumptions of traditional statistical analysis, but there is no good interpretation of a difference in two ratios. As an aside, if you can find an example where the difference in two ratios is meaningful when the ratios do not represent proportions of a whole, please describe such a situation.
As a variable used in statistical analysis, ratios have the significant problem of being asymmetric measures, i.e., it matters greatly which value is in the denominator. This asymmetry makes it almost meaningless to add or subtract ratios. Log ratios are symmetric, and can be added and subtracted.
One can spend a good deal of time worrying about what distribution a test statistic has, or correcting for the distribution's "strangeness", but it is important to first choose an effect measure that has the right mathematical and practical properties. Ratios are almost always meant to be compared by taking the ratio of ratios, or its log (i.e., the double difference in logs of the original measurements).
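The symmetry point is easy to verify numerically (a Python sketch with made-up measurements): swapping numerator and denominator merely flips the sign of a log ratio, and the log of a ratio of ratios is exactly the double difference of logs, so comparisons live on an additive scale:

```python
import math

a, b = 8.0, 2.0                                  # made-up measurements
print(math.log(a / b), math.log(b / a))          # equal magnitude, opposite sign

# ratio of two ratios == double difference in logs of the four measurements
a1, b1, a2, b2 = 8.0, 2.0, 6.0, 3.0
log_rr = math.log((a1 / b1) / (a2 / b2))
double_diff = (math.log(a1) - math.log(b1)) - (math.log(a2) - math.log(b2))
print(math.isclose(log_rr, double_diff))         # True
```

The raw ratios $a/b = 4$ and $b/a = 0.25$ show the asymmetry: they are not equally far from $1$, while their logs are equally far from $0$.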
34,234 | If my goal is to test the absolute change of the ratios, can I compare the ratios directly without log transformation?

The answer from @FrankHarrell, and associated comments from him and @NickCox, answer the question admirably. I would add that the implicit focus on the shape of raw distributions of predictors and outcome variables is misplaced; in linear modeling, what's important is linearity of relations of predictors to outcome and the distribution of residuals.
I also wish to add information on two articles cited in the original question that might explain some sources of the difficulty sensed by the OP. It's important to evaluate articles critically, not just accept them because they happen to have been published.
The cited paper on misuses of log transformations by Feng et al rightly notes some abuses that are possible with log transformations, but tends to leave the impression that log transformations should be avoided rather than used intelligently. For example, the paper says:
using transformations in general and log transformation in particular can be quite problematic in practice to achieve desired objectives
with alleged difficulties noted such as:
there is no one-to-one relation between the original mean and the mean of the log-transformed data...it is not conceptually sensible to compare the variability of the data with its transformed counterpart ... comparing the means of two samples is not the same as comparing the means of their transformed versions
and concluding:
rather than trying to find an appropriate distribution and/or transformation to fit the data, one may consider abandoning this classic paradigm altogether...
I don't see that the alleged difficulties noted in that paper provide reasons to avoid informed use of logarithmic or other transformations. Others have noted more serious deficiencies in that paper. Bland, Altman and Rohlf wrote a direct response, In defence of logarithmic transformations. The full response is apparently behind a paywall, but I believe the following quotes would constitute fair use:
They do not illustrate their article with any real data, however, and appear largely to ignore the context in which log transformations are applied...They also quote out of context the people they criticise...Feng et al. also say ‘Although well-defined statistically, the quantity Exp(E(log X)) has no intuitive and biological interpretation.’ We find no problem in intuition concerning it. Although the expression looks complicated, it is simply the geometric mean.
Bland, Altman and Rohlf conclude:
Log transformation is a valuable tool in the analysis of biological and clinical data. We do not think anyone should be discouraged from using it by this ill-argued and misleading paper.
The paper that "advises to use ANOVA to test the differences among raw fold differences (FD) in immunoblotting" deals nicely with some of the technical difficulties in performing densitometry of what are called "western blots" (difficulties of which I am painfully aware), yet the almost off-hand suggestion at the end of the paper to "Determine the average FD and associated P values for the biological replicates by importing the FD from step (2) above into a statistical analysis software package such a PRISM or Analyze IT" does not seem to have received very critical review. (It also does not rule out the possibility of log-transforming the FD values in the statistical analysis.)
A suggestion to use raw FD actually contradicts the idea presented earlier in that paper that this analysis is "a very similar methodology to qPCR," or the quantitative polymerase chain reaction. Statistical analysis of qPCR is best done on the values of "cycles to threshold" or $C_t$ values. These $C_t$ values have direct $\log_2$ relations to the original amounts of the nucleic-acid sequence being analyzed. Of further note in nucleic-acid quantification, the MA plot widely used in microarray analysis is a Bland-Altman plot on logarithmic transformations of expression data. When errors are proportional to values of interest, the logarithmic transformation can make lots of sense.
34,235 | If my goal is to test the absolute change of the ratios, can I compare the ratios directly without log transformation?

If $X$ and $Y$ are independent normals with zero mean, then the ratio $X/Y$ follows a Cauchy distribution with density
$p(x) = \frac{1}{\pi \gamma} \frac{\gamma^2}{(x-x_0)^2 + \gamma^2}$
where $x_0$ is the location parameter, which is a rough measure of the centrality of the mass, and $\gamma$ is the half-width, which plays roughly the role the standard deviation plays for a normal. (For the ratio of two independent standard normals, $x_0 = 0$ and $\gamma = 1$.) The Cauchy distribution has no mean, no variance and no higher moments.
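A simulation makes the consequence concrete (a Python sketch; the sample sizes are arbitrary). Because the ratio of independent centered normals is Cauchy, the running mean of such ratios never settles down, while the median, which estimates $x_0$, behaves well:

```python
import numpy as np

rng = np.random.default_rng(2)
x = rng.normal(size=1_000_000)
y = rng.normal(size=1_000_000)
r = x / y                                # standard-Cauchy-distributed ratios

for n in (10_000, 100_000, 1_000_000):   # running means fail to stabilize
    print(n, round(float(r[:n].mean()), 2))

print(round(float(np.median(r)), 3))     # the median does estimate x0 = 0
```

This is the practical face of "no mean, no variance": averaging raw ratios of noisy measurements gives a statistic the law of large numbers does not protect.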
34,236 | How to test difference between times series - does "time series ANOVA" exist?

I believe your problem is a textbook case of longitudinal data analysis for which there is an extensive statistical methodology developed. In particular, analysis of response profiles is an approach that I would employ in this case.
To motivate this, I will provide some background from Applied Longitudinal Analysis by G.M. Fitzmaurice:
1.2 Longitudinal and clustered data
The defining feature of longitudinal studies is that measurements of the same individuals are taken repeatedly through time, thereby allowing the direct study of change over time. The primary goal of a longitudinal study is to characterize the change in response over time and the factors that influence change. (...)

A distinctive feature of longitudinal data is that they are clustered. In longitudinal studies the clusters are composed of the repeated measurements obtained from a single individual at different occasions. Observations within a cluster will typically exhibit positive correlation, and this correlation must be accounted for in the analysis. Longitudinal data also have a temporal order; the first measurement within a cluster necessarily comes before the second measurement, and so on. The ordering of the repeated measures has important implications for analysis.
and regarding the choice of analyzing response profiles:
Methods for analyzing response profiles are appealing when there is a single categorical covariate (perhaps denoting different treatment or exposure groups) and when no specific a priori pattern for the differences in the response profiles between groups can be specified. When repeated measures are obtained at the same sequence of occasions, the data can be summarized by the estimated mean response at each occasion, stratified by levels of a group factor. At any given level of the group factor, the sequence of means over time is referred to as the mean response profile.
The nlme R library contains the gls function to fit such a model, giving you a longitudinal "equivalent" of ANOVA:
library(nlme)
# Model assuming the same variance for each time point
gls.fit <-
gls(value ~ factor(time) + treatment,
data = my.data,
corr = corSymm(form = ~ 1 | object),
control = glsControl(tolerance = 0.01, msTol = 0.01,
maxIter = 1000000, msMaxIter = 1000000))
summary(gls.fit)
# Model allowing for different variance structure for each time point
gls.fit.diff.var <-
gls(value ~ factor(time) + treatment,
data = my.data,
corr = corSymm(form = ~ 1 | object),
weights = varIdent(form = ~ 1 | factor(time)),
control = glsControl(tolerance = 0.01, msTol = 0.01,
maxIter = 1000000, msMaxIter = 1000000))
summary(gls.fit.diff.var)
Unfortunately, in this case model estimation does not converge (even though I relaxed the control parameters considerably). I am afraid there are just too few cases in your data set to estimate the parameters.
34,237 | How to test difference between times series - does "time series ANOVA" exist?

You notice correctly that usual ANOVA cannot handle this type of very heteroskedastic and highly dependent time-point data. But also, other repeated-measures or multivariate procedures would fail because you have only 6 replications but 10 time points, so-called high-dimensional data. In the end, it is proven that there cannot be an exact test for data like that.
However, there is a good approximate test: the Huynh-Feldt procedure. See this technical report for references and for a generalization to possibly unequal covariance matrices that you may use. It is defined even for your very small data set. There are no assumptions on the variances and covariances. It is intended for normally distributed data, but I think you're fine with it.
You can test if the curves differ (set T = diag(10)) and if there is an interaction between treatment and time point (set T = diag(10) - 1/10). It's too simple for an extra R-package. Check this:
Tmat <- diag(10) # or Tmat <- diag(10)-1/10
Y1 <- t(matrix(my.data[my.data$treatment=="B","value"],10,3)) %*% Tmat
Y2 <- t(matrix(my.data[my.data$treatment=="A","value"],10,3)) %*% Tmat
Mean1 <- apply(Y1,2,mean)
Mean2 <- apply(Y2,2,mean)
S1 <- cov(Y1)
S2 <- cov(Y2)
F <- sum((Mean1-Mean2)**2) / (sum(diag(S1/3 + S2/3))) # see eq. 3.18
trS1sq <- 3/2 * ( (sum(diag(S1)))**2 - 2/3 * sum(S1**2)) # see eq. 3.26
trS2sq <- 3/2 * ( (sum(diag(S2)))**2 - 2/3 * sum(S2**2))
trS1S1 <- 3/2 * ( sum(S1**2) - (sum(diag(S1)))**2/2) # 3.27
trS2S2 <- 3/2 * ( sum(S2**2) - (sum(diag(S2)))**2/2) # 3.27
f <- (trS1sq + trS2sq + 2*sum(diag(S1))*sum(diag(S2))) /
(trS1S1+trS2S2+2*sum(S1*S2)) # see p. 16
f0 <- (trS1sq + trS2sq + 2*sum(diag(S1))*sum(diag(S2))) /
(trS1S1/2 + trS2S2/2)
p.value <- 1-pf(F,f,f0)
34,238 | Regression without intercept: deriving $\hat{\beta}_1$ in least squares (no matrices)

This is straightforward from the Ordinary Least Squares definition. If there is no intercept, one is minimizing $R(\beta) = \sum_{i=1}^{n} (y_i- \beta x_i)^2$. This is smooth as a function of $\beta$, so all minima (or maxima) occur when the derivative is zero. Differentiating with respect to $\beta$ we get $-\sum_{i=1}^{n} 2(y_i- \beta x_i)x_i$. Setting this to zero and solving for $\beta$ gives the formula $\hat{\beta}_1 = \sum_{i=1}^{n} x_i y_i \big/ \sum_{i=1}^{n} x_i^2$.
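The closed form $\hat\beta = \sum_i x_i y_i / \sum_i x_i^2$ is easy to verify numerically (a Python sketch with made-up data; numpy's lstsq fits the same intercept-free model):

```python
import numpy as np

rng = np.random.default_rng(3)
x = rng.normal(size=50)
y = 2.5 * x + rng.normal(scale=0.5, size=50)   # true slope 2.5, no intercept

beta_hat = np.sum(x * y) / np.sum(x * x)       # the closed form derived above
beta_lstsq = np.linalg.lstsq(x[:, None], y, rcond=None)[0][0]
print(round(beta_hat, 4), round(beta_lstsq, 4))
```

The two estimates agree up to numerical precision, since lstsq minimizes the same residual sum of squares.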
34,239 | How does local connection implied in the CNN algorithm

To understand the local connectivity, first think about giving an image as input to just a regular fully connected neural network. Each input (pixel value) is connected to every neuron in the first layer. So each neuron in the first layer is getting input from EVERY part of the image.
With a convolutional network, each neuron only receives input from a small local group of the pixels in the input image. This is what is meant by "local connectivity", all of the inputs that go into a given neuron are actually close to each other.
For your second question: yes, both the fully connected layers and the convolutional layers can be trained using backpropagation. You take the errors after propagating back to the first fully connected layer and start your convolutional-layer propagation using those.
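The contrast can be made concrete with a small sketch (Python/numpy; the image and window sizes are arbitrary). A unit with a 3x3 local receptive field is completely unaffected by pixels outside its patch, unlike a fully connected neuron:

```python
import numpy as np

rng = np.random.default_rng(4)
img = rng.normal(size=(8, 8))
w = rng.normal(size=(3, 3))              # weights of one locally connected unit

def unit(image, r, c):
    """Output of the unit whose 3x3 receptive field starts at (r, c)."""
    return float(np.sum(image[r:r+3, c:c+3] * w))

before = unit(img, 0, 0)
img2 = img.copy()
img2[7, 7] += 10.0                       # change a pixel far outside the patch
print(unit(img2, 0, 0) == before)        # the unit never sees that pixel
```

Changing a pixel inside the 3x3 patch, by contrast, does change the unit's output.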
34,240 | How does local connection implied in the CNN algorithm

Imagine there is a digit 7 in the image, which is a 4x4 image.
We need to classify the digit in the image from digits 0-9.
Consider breaking the image into 4 regions, here color-coded as red, green, yellow and blue.
Then, each hidden node could be connected to only the pixels in one of these 4 regions; each hidden node sees only a quarter of the original image.
With this new regional breakdown and the assignment of small local groups of pixels to different hidden nodes, every hidden node finds patterns in only one of the four regions in the image.
Then, each hidden node still reports to the output layer, which combines the findings for the patterns learned separately in each region.
This is called a locally connected layer.
34,241 | How does local connection implied in the CNN algorithm | I hope this image will help you to understand the idea of local connectivity in CNN:
[image omitted]
Neurons of the same color use the same kernel filter and share the same weights, and neurons of different colors correspond to different feature maps. So here we have local connectivity, since one blue neuron is connected to only a small region of the image. (P.S. to produce a feature map, you need more than one blue neuron.)
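A rough NumPy sketch of the weight sharing described above (toy numbers; the 2x2 kernel is arbitrary): every output neuron applies the same kernel to a different patch of the image, and together those neurons form one feature map.

```python
import numpy as np

image = np.arange(16, dtype=float).reshape(4, 4)  # toy 4x4 image
kernel = np.array([[1.0, 0.0],
                   [0.0, -1.0]])                  # ONE shared 2x2 kernel

# Each output cell is a neuron connected to a single 2x2 patch,
# and all of them share the same kernel weights.
out_h = image.shape[0] - kernel.shape[0] + 1
out_w = image.shape[1] - kernel.shape[1] + 1
feature_map = np.empty((out_h, out_w))
for i in range(out_h):
    for j in range(out_w):
        feature_map[i, j] = (image[i:i + 2, j:j + 2] * kernel).sum()

print(feature_map.shape)  # (3, 3): nine neurons, one set of weights
```

A second feature map would simply use a second kernel over the same patches.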
34,242 | How does local connection implied in the CNN algorithm | It's often poorly explained. First of all, it's presented in terms of a 4D tensor, but one dimension is just the batch dimension for processing multiple images at a time; you can ignore it for the purpose of understanding the convolution.
So images in a traditional CNN are three dimensional: channels x height x width.
The filters are four dimensional and have the structure input_channels x height x width x output_channels. You can think of them as several (#output_channels of them) 3D linear filters applied to the image.
In this image, the green box is a filter. It convolves by "gliding" in the x and y axis, but its height (#input_channels) is the same as the number of channels in the image, and so it does not move in that direction.
There are #output_channels such boxes, which end up producing the channels of the next layer.
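A shape sketch of these tensors (ignoring the batch dimension; all sizes below are arbitrary, made-up examples), using a naive valid convolution so the gliding is explicit:

```python
import numpy as np

in_ch, H, W = 3, 8, 8            # image: channels x height x width
kh, kw, out_ch = 3, 3, 5         # filters: in_ch x kh x kw x out_ch

rng = np.random.default_rng(0)
image = rng.normal(size=(in_ch, H, W))
filters = rng.normal(size=(in_ch, kh, kw, out_ch))

# Each 3D filter glides only along x and y; at every position it
# spans ALL input channels (it does not move along that axis).
out = np.zeros((out_ch, H - kh + 1, W - kw + 1))
for o in range(out_ch):
    for i in range(H - kh + 1):
        for j in range(W - kw + 1):
            patch = image[:, i:i + kh, j:j + kw]
            out[o, i, j] = (patch * filters[..., o]).sum()

print(out.shape)  # (5, 6, 6): one output channel per 3D filter
```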
34,243 | nonlinear regression two equivalent models on paper, but different estimated parameters | Model 2 is
$$Y_1 = \frac{10 X_1}{a X_1^b X_2^c + 10X_1} + \delta$$
whereas Model 1 is
$$10X_1 \left(-1 + \frac{1}{Y_1}\right) = a X_1^b X_2^c + \varepsilon,$$
which can be solved for $Y_1$ to read
$$Y_1 = \frac{10 X_1}{a X_1^b X_2^c + 10X_1 + \varepsilon}.$$
Implicitly it is supposed that the errors $\varepsilon$ or $\delta$, as the case may be, are independent, identically distributed, and centered at zero.
To compare the two models let's assume the variability of $\varepsilon$ is substantially less than the magnitude of $aX_1^bX_2^c + 10X_1$. We may then use the Binomial Theorem (or, equivalently, a Taylor series) to approximate the right hand side of Model 1 (to first order in $\varepsilon$) as
$$Y_1 \approx \frac{10 X_1}{a X_1^b X_2^c + 10X_1}\left(1 - \frac{\varepsilon}{a X_1^b X_2^c + 10X_1} + \cdots\right).$$
Comparing to Model 2, we see the difference between them lies in the error terms:
$$\delta \approx \frac{-10 X_1}{\left(a X_1^b X_2^c + 10X_1\right)^2} \varepsilon.$$
These are different models because if the $\varepsilon$ have identical distributions, the $\delta$ cannot--since they rescale the $\varepsilon$ by factors that depend on the variables $X_1$ and $X_2$. Conversely, if the $\delta$ have identical distributions then the $\varepsilon$ cannot.
To decide which one to use (if either), you will need additional information concerning the distributions of the errors. This can be obtained in many ways, including
Theoretical considerations. For instance, if the error is intended to represent measurement variability of $Y_1$ and that variability is known to be (roughly) constant across a range of values of $Y_1$, then Model 2 is a good choice.
Analysis of repeated measurements.
Review of diagnostic information from each model (related to the possible heteroscedasticity of the residuals).
The red curves show the correct underlying relationships. The dots show simulated data. Their vertical deviations from the red curves represent the errors. The dispersion of the errors in Model 1, at the left, visibly varies with the independent variables. The dispersion in Model 2, at the right, does not.
This figure shows data simulated with the R code below. To simplify the presentation, all values of $X_2$ were set to a constant value, causing all variation in $Y_1$ to be associated with variation in $X_1$ only. This simplification does not change the nature of the differences between the two models.
a <- 1
b <- 2
c <- 3
n <- 250
sigma <- 2
#
# Generate data according to two models.
#
set.seed(17)
x1 <- rgamma(n, 2) + 1
x2 <- rep(1, n)
epsilon <- rnorm(n, sd=sigma)
y.m1 <- 10 * x1 / (a * x1^b * x2^c + 10*x1 + epsilon)
# (Make them have comparable errors on average.)
tau <- mean(abs(-10 * x1 / (a * x1^b * x2^c + 10*x1)^2))
delta <- rnorm(n, sd=tau)
y.m2 <- 10 * x1 / (a * x1^b * x2^c + 10*x1) + delta
#
# Plot the simulated data.
#
reference <- function() curve(10 * x / (a*x^b + 10*x), add=TRUE, col="Red", lwd=2)
par(mfrow=c(1,2))
plot(x1, y.m1, main="Model 1", xlab="X1", ylab="Y1", col="#00000070")
reference()
plot(x1, y.m2, main="Model 2", xlab="X1", ylab="Y1", col="#00000070")
reference()
34,244 | nonlinear regression two equivalent models on paper, but different estimated parameters | These are NOT equivalent least squares models. The error in one model is a transformation of the error in the other. Whichever model's error is closer to being Normally distributed should be the better model. Edit: see whuber's answer for more elaboration on the error transformation.
There is a second matter as well, and that is the question of what numerical solution is obtained by the nonlinear least squares algorithm. The solution obtained can depend on the algorithm used to solve it as well as on the starting value (initial guess) for the parameters being estimated. Depending on the algorithm and starting value, it's possible that the algorithm will exit without finding a local optimum. It's possible that a local optimum will be found which is not the global optimum.
You should want the globally optimal solution. Whether you find it is another matter. That's where it helps to know what you're doing in nonlinear optimization, which unfortunately most people performing nonlinear least squares don't.
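A toy illustration of the non-equivalence (a hypothetical one-parameter model, not the OP's model): fitting $y = 1/(a + x)$ by least squares on the raw response versus on the transformed response $1/y$ gives different estimates of $a$ once there is noise.

```python
import numpy as np

# Simulate from y = 1/(a + x) with a = 2 and additive noise on y.
# (All numbers are made up for illustration.)
rng = np.random.default_rng(42)
a_true = 2.0
x = rng.uniform(0.5, 3.0, size=500)
y = 1.0 / (a_true + x) + rng.normal(scale=0.01, size=x.size)

# Form 1: regress the transformed response. 1/y = a + x + error,
# so the least squares estimate of a is just the mean of (1/y - x).
a_form1 = np.mean(1.0 / y - x)

# Form 2: minimize squared error on the ORIGINAL response,
# via a simple grid search over a.
grid = np.linspace(1.5, 2.5, 2001)
sse = [np.sum((y - 1.0 / (g + x)) ** 2) for g in grid]
a_form2 = grid[int(np.argmin(sse))]

print(a_form1, a_form2)  # both near 2, but not equal to each other
```

The two estimates differ because the two objectives weight the same noise differently, exactly the point made above.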
34,245 | Vowpal Wabbit: best strategy for short text data like titles & kewords | Here are some tips for enhancing the performance of VW models:
Shuffle the data prior to training. Having a non-random ordering of your dataset can really mess VW up.
You're already using multiple passes, which is good. Try also decaying the learning rate between passes, with --decay_learning_rate=.95.
Play around with the learning rate. I've had cases where --learning_rate=10 was great and other cases where --learning_rate=0.001 was great.
Try --oaa 16 or --log_multi 16 rather than --ect 16. I usually find ect to be less accurate. However, oaa is pretty slow. I've found --log_multi to be a good compromise between speed and accuracy. On 10,000 training examples, --oaa 16 should be fine.
Play with the loss function. --loss_function=hinge can sometimes yield large improvements in classification models.
Play with the --l1 and --l2 parameters, which regularize your model. --l2 in particular is useful with text data. Try something like --l2=1e-6.
For text data, try --ngram=2 and --skips=2 to add n-gram and skip grams to your models. This can help a lot.
Try --autolink=2 or --autolink=3 to fit a quadratic or cubic spline model.
Try ftrl optimization with --ftrl. This can be useful with text data or datasets with some extremely rare and some extremely common features.
Try some learning reductions:
Try a shallow neural network with --nn=1 or --nn=10.
Try a radial kernel svm with --ksvm --kernel=rbf --bandwidth=1. (This can be very slow).
Try a polynomial kernel svm with --ksvm --kernel=poly --degree=3. (This can be very slow).
Try a gbm with --boosting=25. This can be a little slow.
VW is extremely flexible, so it often takes a lot of fine tuning to get a good model on a given dataset. You can get a lot more tuning ideas here: https://github.com/JohnLangford/vowpal_wabbit/wiki/Command-line-arguments
Regarding the post you linked to: that person used vw with squared loss on an unbalanced classification problem. That's a silly thing to do, and pretty much guarantees that any linear model will always predict the dominant class. If you're worried about class balance, VW supports weights, so you can over-weight the rarer classes.
Edit: You have 100 classes and 10,000 training examples? That's an average of 100 observations per class, which isn't that many to learn from, no matter what model you use.
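For concreteness, here is a hypothetical command line combining several of the tips above for a 16-class text problem (file names, pass count, and bit width are placeholders, not recommendations):

```shell
vw -d train.vw -c --passes 5 \
   --oaa 16 \
   --ngram 2 --skips 2 \
   --l2 1e-6 \
   --loss_function hinge \
   --decay_learning_rate 0.95 \
   -b 28 -f model.vw
```

(`-c` creates the cache file that `--passes` requires; `-b` sets the feature-hash bit width and `-f` saves the model.)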
34,246 | How to determine if GLS improves on OLS? | The real difference between OLS and GLS is the assumptions made about the error term of the model. In OLS we (at least in CLM setup) assume that $Var(u)=\sigma^2 I$, where I is the identity matrix - such that there are no off diagonal elements different from zero. With GLS this is no longer the case (it could be, but then GLS = OLS). With GLS we assume that $Var(u) = \sigma^2 \Sigma$, where $\Sigma$ is the variance-covariance matrix.
Many textbooks introduce GLS via WLS, which is the GLS function that eliminates heteroskedasticity (or tries to). This means that the usual t/F statistics can be valid for the GLS estimation, but not for the OLS. This is less troublesome today, since you can just compute robust variance estimates and base your inference on those - same as you normally would.
This implies that the difference between OLS and GLS is in the variance of the estimates. And the real reason to choose GLS over OLS is indeed to gain asymptotic efficiency (smaller variance as $n \rightarrow \infty$). It is important to know that the OLS estimates can be unbiased even if the underlying (true) data generating process actually follows the GLS model. If GLS is unbiased then so is OLS (and vice versa).
You can very easily prove this, but basically the assumptions for consistency/unbiasedness do not rely on the variance of the estimates at all.
A more subtle point is that, unless you know the actual GLS function, it is not unbiased but only consistent.
I would therefore argue that choosing between OLS and GLS based on estimates and $R^2$ is the wrong way to think about it. The estimates of both OLS and GLS should be close to one another, if not numerically then in the size of the “impact”. If they are not, then it would most likely indicate that you have a functional form misspecification, or that you have left out variables.
I don’t know whether or not excluding the GLS weights covariates is justified in your case - but perhaps it is worth trying to include them in the OLS estimation and see what happens? It might make the reader less “suspicious” about your conclusion (but this is pure speculation on my part).
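A small Monte Carlo sketch of the efficiency point (toy data; the skedastic function is assumed KNOWN here, which real applications rarely have): both estimators are unbiased for the slope, but GLS has the smaller sampling variance.

```python
import numpy as np

rng = np.random.default_rng(1)
n, reps = 200, 2000
x = rng.uniform(1.0, 5.0, size=n)
X = np.column_stack([np.ones(n), x])
beta = np.array([1.0, 2.0])
sd = x  # Var(u_i) = x_i^2: heteroskedastic, with known weights

b_ols = np.empty(reps)
b_gls = np.empty(reps)
for r in range(reps):
    u = rng.normal(scale=sd)
    y = X @ beta + u
    # OLS slope
    b_ols[r] = np.linalg.lstsq(X, y, rcond=None)[0][1]
    # GLS here = weighted LS: divide by sd, then run OLS
    Xw, yw = X / sd[:, None], y / sd
    b_gls[r] = np.linalg.lstsq(Xw, yw, rcond=None)[0][1]

print(b_ols.mean(), b_gls.mean())   # both near the true slope 2
print(b_ols.var(), b_gls.var())     # GLS variance is smaller
```

Note how the point estimates agree on average (both unbiased); only their spread differs, which is why comparing point estimates or $R^2$ is not the right criterion.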
34,247 | What frequentist statistics topics should I know before learning Bayesian statistics? | It is not necessary to call it frequentist material; rather, it is material from probability and statistics in general.
Here are some examples of prior knowledge that, in my opinion, would be handy:
What are densities, (conditional) distributions, expectations etc.?
Some specific distributional families (Beta, normal, uniform etc.)
Most likely you will want to apply Bayesian methods to real data, so
statistical software. My favorite: R
Some mathematics: Matrix algebra, integration, ...
Also, it could be handy to be familiar with some statistical models, such as the linear model $y=X\beta+u$.
Given the heavy emphasis on the likelihood, it cannot hurt to have heard about maximum likelihood before
The Bayesian paradigm being a subjective one, I am sure others will disagree with or add to this list...
34,248 | What frequentist statistics topics should I know before learning Bayesian statistics? | You don't have to learn 'frequentist' or Bayesian statistics in any particular order. You should first learn whatever you need to understand the findings in your field, and then you should understand the mathematical (computation) and philosophical (interpretation) relationships between the techniques. There is no teacher like real data, so that is always the first concern.
There's no particular reason you couldn't learn them at the same time. It's helpful to know the gist of calculus for Bayes, which is presumably where its reputation for being "harder" comes from, but I wouldn't call it necessary now that we have much better software than just a few years ago. If you are new to statistics and want to play around with both the frequentist and Bayesian framework, I can recommend the new JASP software. If you like R, the BayesFactor package is solid.
If you want to start from frequentism, I would suggest knowing the following:
The full and exact interpretation for all of the following items.
The relationship between p-values, confidence intervals, sample sizes, power and error rates.
The relationship between Z-tests, t-tests, analysis of variance and linear regression.
The relationship between linear regression and nonlinear regression, as well as parametric versus nonparametric tests.
The relationship between dummy variables, contrasts and effects coding.
The full and exact interpretation for all of the preceding items.
That sounds like a lot, but these things are all connected in fundamental ways. Every inference boils down to the same essential thing: we want to make correct predictions about unobserved data, based on a model of observed data, by comparing two or more models. We do this by computing our confidence, for some definition of "confidence," in two or more models and taking the ratio. At its most basic, that's all.
A lot of the controversy is really just about formalizing "confidence," and while it's an important discussion that I'm glad we're having, it's also not something you need to be aware of right now. In the frequentist framework, special steps are taken to create an implicit null model to put in the denominator, whereas in the Bayesian framework, both models are stated explicitly, but the actual output and interpretations for both frameworks involve a substantial degree of subjectivity. For frequentism, it's in the construction of maximum likelihood and choice of error rate, and for Bayesians, it's in the prior. Everyone should learn both, in my view.
34,249 | Why do irrelevant regressors become statistically significant in large samples? | Questions:
How come an irrelevant regressor turn out statistically significant?
I think it's helpful to think about what happens as your sample size approaches the population itself. Significance testing is meant to give you an idea of whether or not an effect exists in the population. This is the reason why, when working with census data (which surveys the entire population), significance testing is meaningless (because what are you trying to generalize to?).
With that in mind, what does "an effect in the population" mean? It simply means any relationship between variables in the population, regardless of how small (be it a 1-point or 1-person difference), even if that relationship is due to chance and randomness in the universe.
Thus, as your sample approaches the size of the population, significance tests become less and less meaningful because any difference will be "statistically significant". What you would be more interested in then is effect size - which is analogous to "practically significant".
Should I look for subject-matter explanation (i.e. try to deny irrelevance) or is this a statistical phenomenon?
It's a phenomenon - you should look at effect sizes.
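The point that any nonzero effect eventually becomes significant can be illustrated with a small simulation. This sketch is in Python rather than the site's usual R, and the slope of 0.01, the seed, and the sample sizes are arbitrary choices, not from the answer:

```python
import numpy as np

def slope_t(x, y):
    # t-statistic of the OLS slope of y on x
    xc, yc = x - x.mean(), y - y.mean()
    b = (xc @ yc) / (xc @ xc)
    resid = yc - b * xc
    s2 = (resid @ resid) / (len(x) - 2)
    return b / np.sqrt(s2 / (xc @ xc))

rng = np.random.default_rng(42)
t_stats = {}
for n in (100, 1_000_000):
    x = rng.normal(size=n)
    y = 0.01 * x + rng.normal(size=n)  # slope 0.01: tiny but nonzero
    t_stats[n] = slope_t(x, y)
print(t_stats)  # |t| grows roughly like 0.01 * sqrt(n)
```

At n = 100 the tiny slope is invisible; at n = 1,000,000 it is overwhelmingly "significant", even though its practical size has not changed.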
34,250 | Why do irrelevant regressors become statistically significant in large samples? | In addition to the excellent answers already posted, I will try from another point of view. All models are approximations, in some sense ... Look at some regression model, and some irrelevant variable is significant. What can explain it?
Maybe it just is not irrelevant: today's scientific consensus on that matter may simply be wrong. Apart from that:
It could be a stand-in or proxy for some omitted variable which is relevant, and which is correlated with the irrelevant variable.
Some relevant variable, included linearly in the model, could be acting non-linearly, and your irrelevant variable could be a stand-in for that part of the relevant variable.
Some interaction between two relevant variables is important, but not included in the model. Your irrelevant variable could be a stand-in for that omitted interaction.
The irrelevant variable could just be very highly correlated with some important variable, leading to negatively correlated coefficient estimates. This could be especially important if there are measurement errors in these variables.
There could be some observations with very high leverage, leading to strange estimates.
Surely others ... an important point is that a linear regression model could be a very good approximation with a small sample; only large effects will be significant. A larger sample will lead to lower variance, but it cannot reduce bias due to approximations. So with larger samples those inadequacies of the model become manifest, and will eventually dominate over variance.
34,251 | Why do irrelevant regressors become statistically significant in large samples? | Even if your sample size doesn't approach your population, tiny effects become significant in large samples. This is a consequence of what statistical significance means:
If, in the population from which this sample was taken, the null
hypothesis was true, is it (XX%) likely that we would get a test
statistic at least this large in a sample of the size we have?
If your question is something about all people on Earth, then if you take a sample of 1,000,000 (not close to 7,000,000,000) even very tiny effects will be significant, because it's very unlikely to find such test statistics in large samples when the null is true.
There are lots of problems with significance testing, discussed in many places. This is one of them. The "cure" is to look at effect sizes and confidence intervals.
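For a one-sample z-test of a standardized effect $d$, the test statistic is roughly $d\sqrt{n}$, so the p-value for a fixed tiny effect shrinks as $n$ grows. A small Python illustration (the effect size $d = 0.01$ and the sample sizes are arbitrary, not from the answer):

```python
from math import erfc, sqrt

def two_sided_p(z):
    # two-sided p-value for a standard normal test statistic
    return erfc(abs(z) / sqrt(2))

d = 0.01  # a tiny (hypothetical) standardized effect
p_values = {n: two_sided_p(d * sqrt(n)) for n in (10_000, 1_000_000)}
print(p_values)  # ~0.32 at n = 10^4, ~1.5e-23 at n = 10^6
```

The effect is identical in both cases; only the sample size changed.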
34,252 | Why do irrelevant regressors become statistically significant in large samples? | I have borrowed some insight from @QxV to provide an explanation of the presence of a population effect even when subject-matter knowledge suggests no such effect.
Suppose there is a population-generating process (PGP) that generates populations with features $X$ and $Y$. The PGP formula is such that $Y$ and $X$ are independent random variables. Due to randomness, any finite-length realization vectors $y_{realized}$ and $x_{realized}$ have zero probability of exact uncorrelatedness, i.e. $P(y_{realized} \perp x_{realized})=0$. If so, with probability one there is a population effect. That is how effects come about in population.
Once a population effect exists, it is a matter of sample size when we will detect it in the sample and when it will become statistically significant.
34,253 | Why do irrelevant regressors become statistically significant in large samples? | No. Irrelevant regressors do not become statistically significant as sample size increases. Try the following code in R.
y <- rnorm(10000000)
x <- rnorm(10000000)
summary(lm(y~x))
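For contrast with the R snippet above, here is a Python sketch of the same point (not part of the answer; seed and sample sizes are arbitrary): when the true slope is exactly zero, the t-statistic does not drift toward significance as $n$ grows.

```python
import numpy as np

def slope_t(x, y):
    # t-statistic of the OLS slope of y on x
    xc, yc = x - x.mean(), y - y.mean()
    b = (xc @ yc) / (xc @ xc)
    resid = yc - b * xc
    s2 = (resid @ resid) / (len(x) - 2)
    return b / np.sqrt(s2 / (xc @ xc))

rng = np.random.default_rng(0)
ts = {n: slope_t(rng.normal(size=n), rng.normal(size=n))
      for n in (100, 10_000, 1_000_000)}
print(ts)  # all of order 1: no drift toward significance
```

This is the exact-null case; the other answers explain why real-world regressors are rarely this perfectly irrelevant.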
34,254 | Binomial random variable conditional on another one | Let $X = \sum_{i=1}^{n} X_i$, with $X_i \overset{iid}{\sim} Bin(1, p)$, and $Z = \sum_{i=1}^{n} Z_i$, with $Z_i \overset{iid}{\sim} Bin(1, q)$. If all the $X_i$ and $Z_i$ are mutually independent, then $Z_i | X_i \overset{iid}{\sim} Bin(1, q)$.
Now to construct $Y$ we want to throw out all the $(X_i, Z_i)$ pairs where $X_i=0$ and then count the number of times $Z_i=1$ in the remaining pairs. That makes $Y \mid X = x \sim Bin(x, q)$. We can also write $Y = \sum_{i=1}^{n} Y_i$ with $Y_i = X_i Z_i$. We know $X_i Z_i=1$ if $X_i=1$ and $Z_i=1$, otherwise it is 0. Thus $Y_i \overset{iid}{\sim} Bin(1, pq)$, and $Y \sim Bin(n, pq)$.
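A quick simulation makes the claim easy to check. This Python sketch is not from the answer, and the parameter values are arbitrary; it compares the two-stage draw $Y \mid X$ against a direct $Bin(n, pq)$ draw:

```python
import numpy as np

rng = np.random.default_rng(1)
n, p, q, reps = 20, 0.6, 0.3, 200_000
x = rng.binomial(n, p, size=reps)          # X ~ Bin(n, p)
y = rng.binomial(x, q)                     # Y | X = x ~ Bin(x, q)
direct = rng.binomial(n, p * q, size=reps) # Bin(n, pq) directly
print(y.mean(), direct.mean(), n * p * q)  # all close to 3.6
print(y.var(), n * p * q * (1 - p * q))    # both close to 2.95
```

Both the mean and the variance of the two-stage draw match the $Bin(n, pq)$ values.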
34,255 | Binomial random variable conditional on another one | Looking at the same thing in two different but equivalent ways offers insight.
A Binomial$(n,p)$ variable is the sum of $n$ independent Bernoulli$(p)$ variables. A Bernoulli variable works exactly like drawing one ticket from a box in which all tickets have either a $0$ or $1$ written on them; the proportion of the latter is $p$.
To say that $X=x$ means that $n$ such tickets were drawn from such an "$X$ box" (with replacement each time) and $x$ of them had a $1$ on it. To say that $Y$ has a Binomial$(X,q)$ distribution amounts to performing a second follow-on experiment in which $x$ draws (with replacement) are made from a separate box, the "$Y$ box," in which the proportion of tickets with $1$s is $q$. The value of $Y$ is the count of the $1$s that are drawn.
An alternative way to carry out the same procedure is not to wait until all $n$ tickets are drawn from the $X$ box. Instead, after drawing each ticket, immediately read its value. If it says $X=0$, do nothing more. If it says $X=1$, though, immediately draw a ticket from the $Y$ box and read its value.
This alternative procedure can be described by drawing a single ticket from a new box. Up to two numbers are written on each ticket, called "$X$" and "$Y$", to record a single sequence of up to two draws. According to the foregoing description, which has three outcomes, there must be three kinds of corresponding tickets:
$X=0$. These tickets model drawing a value of $0$ from the $X$ box. Their proportion within the new box, in order to emulate the properties of the first step, must equal $1-p$. Don't bother to write any value for $Y$, because $Y$ will not be observed when such a ticket is drawn.
$X=1, Y=0$. These tickets model drawing a $1$ from the $X$ box and then a $0$ from the $Y$ box.
$X=1, Y=1$. These tickets model drawing a $1$ from the $X$ box and then a $1$ from the $Y$ box.
The total proportion of tickets of types (2) and (3) must equal the proportion of $1$s in an $X$ box, namely $p$. Since $Y$ is drawn independently of $X$, the fraction of the tickets with $X=1$ for which $Y=1$ must be $q$. The fraction of the tickets with $X=1$ for which $Y=0$ similarly must be $1-q$.
To summarize, the three tickets and their proportions in the new box must be
$X=0$, proportion $1-p$.
$X=1, Y=0$, proportion $p(1-q)$.
$X=1, Y=1$, proportion $pq$.
What kind of variable is $Y$? According to our new (but equivalent) description, it is obtained by drawing $n$ tickets from the new box (with replacement) and counting the number of times a value of $1$ for $Y$ is observed. The only way this can happen is when the third type of ticket is drawn. These occupy a fraction $pq$ of all the tickets. This exhibits $Y$ as the sum of $n$ independent Bernoulli$(pq)$ variables, whence $Y$ has a Binomial$(n, pq)$ distribution.
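The three-ticket box can be simulated directly. This is an illustrative Python sketch (not part of the answer; parameters arbitrary) that draws the ticket types from a multinomial with the proportions listed above:

```python
import numpy as np

rng = np.random.default_rng(2)
n, p, q, reps = 30, 0.5, 0.4, 100_000
# ticket proportions: [X=0], [X=1, Y=0], [X=1, Y=1]
tickets = rng.multinomial(n, [1 - p, p * (1 - q), p * q], size=reps)
y = tickets[:, 2]                        # third-type tickets per experiment
print(y.mean(), n * p * q)               # close to 6.0
print(y.var(), n * p * q * (1 - p * q))  # close to 4.8
```

Counting only the third ticket type reproduces the $Bin(n, pq)$ mean and variance, as the box argument predicts.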
34,256 | Binomial random variable conditional on another one | Have you tried calculating the marginal distribution? In general, for a discrete random variable the following is true:
\begin{align*}
p(y) &= \sum_x p(y,x)\\
&=\sum_x p(y|x)p(x)
\end{align*}
so all you need to do is show the following:
\begin{align*}
p(y) &= \sum_{x=y}^{n} \binom{x}{y}q^y(1-q)^{x-y}\binom{n}{x}p^x(1-p)^{n-x}\\
&=\,\,\,\vdots\\
&=\binom{n}{y} (pq)^y (1-pq)^{n-y}
\end{align*}
Is this a trivial problem? I don't think so.
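Before attempting the algebra, the identity can at least be verified numerically. A small Python check (parameter values arbitrary; the `pmf` helper is mine, built from `math.comb`):

```python
from math import comb

def pmf(k, n, p):
    # Binomial(n, p) probability mass at k
    return comb(n, k) * p**k * (1 - p)**(n - k)

n, p, q = 10, 0.4, 0.7
lhs = [sum(pmf(y, x, q) * pmf(x, n, p) for x in range(y, n + 1))
       for y in range(n + 1)]
rhs = [pmf(y, n, p * q) for y in range(n + 1)]
print(max(abs(l - r) for l, r in zip(lhs, rhs)))  # floating-point noise only
```

The marginal sum and the $Bin(n, pq)$ pmf agree term by term, which is exactly what the algebraic derivation below establishes.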
34,257 | Binomial random variable conditional on another one | Following Dan, the algebraic computation can actually be done, and it's not so complicated:
$$
P(Y=y) = \sum_{x=y}^n P(Y=y|X=x)P(X=x) = \sum_{x=y}^n \binom xy q^y(1-q)^{x-y} \binom nx p^x(1-p)^{n-x}
$$
expanding the binomial coefficients and cancelling $x!$:
$$ \frac{n!}{y!} p^y q^y \sum_{x=y}^n \frac 1{(x-y)!}(1-q)^{x-y}\frac 1{(n-x)!} p^{x-y}(1-p)^{n-x} $$
change variables: $t=x-y$:
$$ \frac{n!}{y!} (pq)^y \sum_{t=0}^{n-y} \frac 1{t!(n-y-t)!} (1-q)^tp^t(1-p)^{n-y-t}=\\ \frac{n!}{y!(n-y)!} (pq)^y \sum_{t=0}^{n-y} \frac {(n-y)!}{t!(n-y-t)!} (p-pq)^t(1-p)^{n-y-t}=\\ \binom ny (pq)^y (p-pq+1-p)^{n-y}=\binom ny (pq)^y (1-pq)^{n-y}$$
Using the binomial theorem in the last step.
34,258 | High precision with low recall SVM | The quality of your classifier, as those metrics show, will depend on how you intend to use it. E.g.
It is a great classifier if your data is a set of documents, you are looking for documents of type H, and your main concern is to make sure that most relevant documents are retrieved (high recall on H). Furthermore, the precision for H, i.e. the percentage of retrieved documents that are relevant, is high too, which is even better in case having irrelevant documents amongst the retrieved documents is costly.
It is a terrible classifier if you try to retrieve as many documents of type R as possible, because the recall on R is 0.23 only, which means you are going to miss 77% of the documents.
It is a great classifier if you want to retrieve just a few documents of type R (low recall on R doesn't matter in this case) but having irrelevant documents amongst the retrieved documents is costly (since you have a high precision on R, you won't have to pay too much for irrelevant documents).
etc.
(Btw there is unfortunately no consensus on the confusion matrix notation, so when you post a confusion matrix, you might want to specify where the predicted and true values are, even though in most cases we can infer it from the precision/recall values)
34,259 | High precision with low recall SVM | One difficulty in answering the question is that you didn't mention the nature of the data set you are actually using the classifier for. Franck's answer is excellent, but he assumes you are using it to find documents. The response may be a bit different if your application is medical research / clinical trials or evaluating the performance of your SVM for a stock or futures trading system, etc.
In your case the support values for H & R are very different and this has implications for the bias inherent in metrics such as Precision, Recall and F1 that you are using.
Depending on the application, it may be preferable to use unbiased metrics which adjust for the differences in support. For example, the unbiased version of Precision is Markedness, defined as: Markedness = Precision + NPV - 1 = TP/(TP+FP) + TN/(TN+FN) - 1 = 0.72 for your example. The other unbiased metric is Informedness = TPR - FPR = distance from random on the ROC chart, which comes out to be 0.22 in your case. The geometric mean of Markedness & Informedness is the Matthews correlation coefficient = 0.40 for your example.
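These relationships can be spelled out in code. The confusion matrix below is hypothetical (the question's actual counts are not shown here); it is chosen only to illustrate that MCC equals the geometric mean of Markedness and Informedness when both are positive:

```python
from math import sqrt

# a hypothetical confusion matrix (NOT the one from the question)
tp, fp, fn, tn = 90, 10, 30, 70

precision = tp / (tp + fp)                        # 0.9
npv = tn / (tn + fn)                              # 0.7
markedness = precision + npv - 1                  # 0.6
informedness = tp / (tp + fn) - fp / (fp + tn)    # TPR - FPR = 0.625
mcc = (tp * tn - fp * fn) / sqrt(
    (tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
print(markedness, informedness, mcc)  # mcc = sqrt(0.6 * 0.625)
```

Swapping in your own TP/FP/FN/TN values reproduces the 0.72, 0.22 and 0.40 figures quoted above.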
Whether or not the low value of Sensitivity (Recall) for class R is a problem depends on the associated "cost" of this error in your particular case.
34,260 | Kurtosis of made up distribution | There will be an infinite number of distributions that look very similar to your drawing, with a variety of different values for kurtosis.
With the particular conditions in your question and given we hold the crossover point to be inside, or at least not too far outside $\pm 1$, it should be the case that you get a slightly larger kurtosis than for the normal. I will show three cases where that happens, and then I'll show one where it is smaller -- and explain what causes it to happen.
Given that $\phi(x)$ and $\Phi(x)$ are the standard normal pdf and cdf respectively, let's write ourselves a little function
$$f(x) = \begin{cases} \phi(x) & \text{if } |x| > t \\
a + b\,g(x) & \text{if } |x| \leq t \end{cases}$$
for some continuous, symmetric density $g$ (with corresponding cdf $G$), with mean $0$, such that $b = \frac{\Phi(t) - \tfrac{1}{2} - t\,\phi(t)}{G(t) - \tfrac{1}{2} - t\,g(t)}$ and $a = \phi(t) - b\,g(t)$.
That is, $a$ and $b$ are chosen to make the density continuous and integrate to $1$.
Example 1 Consider $g(x) = 3\, \phi(3x)$ and $t=1$,
which looks something like your drawing, here generated by the following R code:
f <- function(x, t=1,
dg=function(x) 2*dnorm(2*x),
pg=function(x) pnorm(2*x),
b=(pnorm(t) - 0.5 - t*dnorm(t))/ (pg(t) - 0.5 - t*dg(t)),
a=dnorm(t)-b*dg(t) ) {
ifelse(abs(x)>t,dnorm(x),a+b*dg(x))
}
f1 <- function(x) f(x,t=1,dg=function(x) 3*dnorm(3*x),pg=function(x) pnorm(3*x))
curve(f1,-4,4,col=2)
curve(dnorm, -4, 4, add=TRUE, col=3)
Now the calculations. Let's make a function to evaluate $x^pf_1(x)$:
fp <- function(x,p=2) x^p*f1(x)
so we can evaluate the moments. First the variance:
integrate(fp,-Inf,Inf) # should be just smaller than 1
0.9828341 with absolute error < 1.4e-07
Next the fourth central moment:
integrate(fp,-Inf,Inf,p=4) # should be just smaller than 3
2.990153 with absolute error < 8.3e-06
We need the ratio of those numbers, which should have about 5 figure accuracy
integrate(fp,-Inf,Inf,p=4)$value/(integrate(fp,-Inf,Inf)$value^2)
[1] 3.095515
So the kurtosis is about 3.0955, slightly larger than for the normal case.
Of course we could compute it algebraically and get an exact answer, but there's no need, this tells us what we want to know.
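As a cross-check, the same ratio can be reproduced numerically outside R. Here is a small Python sketch (assuming scipy is available; the integration limits are truncated at $\pm 10$, which is harmless for normal tails):

```python
from scipy.stats import norm
from scipy.integrate import quad

t = 1.0
g = lambda x: 3 * norm.pdf(3 * x)   # inner density for Example 1
G = lambda x: norm.cdf(3 * x)

# constants chosen for continuity and unit mass, as above
b = (norm.cdf(t) - 0.5 - t * norm.pdf(t)) / (G(t) - 0.5 - t * g(t))
a = norm.pdf(t) - b * g(t)

def f1(x):
    return norm.pdf(x) if abs(x) > t else a + b * g(x)

# second and fourth moments; `points` flags the kinks at +/- t for quad
m2 = quad(lambda x: x**2 * f1(x), -10, 10, points=[-t, t])[0]
m4 = quad(lambda x: x**4 * f1(x), -10, 10, points=[-t, t])[0]
kurtosis = m4 / m2**2   # about 3.0955, matching the R result
```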
Example 2 With the function $f$ defined above we can try it for all manner of $g$'s.
Here's the Laplace:
library(distr)
D <- DExp(rate = 1)
f2 <- function(x) f(x,t=1,dg=d(D),pg=p(D))
curve(f2,-4,4,col=2)
curve(dnorm, -4, 4, add=TRUE, col=3)
fp2 <- function(x,p=2) x^p*f2(x)
integrate(fp2,-Inf,Inf) # should be just smaller than 1
0.9911295 with absolute error < 1.1e-07
integrate(fp2,-Inf,Inf,p=4) # should be just smaller than 3
2.995212 with absolute error < 5.9e-06
integrate(fp2,-Inf,Inf,p=4)$value/(integrate(fp2,-Inf,Inf)$value^2)
[1] 3.049065
Unsurprisingly, a similar result.
Example 3: Let's take $g$ to be a Cauchy distribution (a Student-t distribution with 1 d.f.) with scale 2/3 (that is, if $h(x)$ is a standard Cauchy, $g(x) = 1.5\, h(1.5 x)$), and again set the threshold $t$ (giving the points $\pm t$ outside which we 'switch' to the normal) to be 1.
dg <- function(x) 1.5*dt(1.5*x,df=1)
pg <- function(x) pt(1.5*x,df=1)
f3 <- function(x) f(x,t=1,dg=dg,pg=pg)
curve(f3,-4,4,col=2)
curve(dnorm, -4, 4, add=TRUE, col=3)
fp3 <- function(x,p=2) x^p*f3(x)
integrate(fp3,-Inf,Inf) # should be just smaller than 1
0.9915525 with absolute error < 1.1e-07
integrate(fp3,-Inf,Inf,p=4) # should be just smaller than 3
2.995066 with absolute error < 6.2e-06
integrate(fp3,-Inf,Inf,p=4)$value/(integrate(fp3,-Inf,Inf)$value^2)
[1] 3.046316
And just to demonstrate that we have actually got a proper density:
integrate(f3,-Inf,Inf)
1 with absolute error < 9.4e-05
Example 4: However, what happens when we change t?
Take $g$ and $G$ as the previous example, but change the threshold to $t=2$:
f4 <- function(x) f(x,t=2,dg=dg,pg=pg)
curve(f4,-4,4,col=2)
curve(dnorm, -4, 4, add=TRUE, col=3)
fp4 <- function(x,p=2) x^p*f4(x)
integrate(fp4,-Inf,Inf,p=4)$value/(integrate(fp2,-Inf,Inf)$value^2)
[1] 2.755231
How does this happen?
Well, it's important to know that kurtosis is (speaking slightly loosely) 1 plus the variance of $Z^2$, where $Z=(X-\mu)/\sigma$; in effect it measures how spread out the distribution is about the two points $\mu\pm\sigma$:
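In symbols, with $Z=(X-\mu)/\sigma$:
$$\text{kurtosis} = E[Z^4] = \operatorname{Var}(Z^2) + \big(E[Z^2]\big)^2 = \operatorname{Var}(Z^2) + 1,$$
and since $\operatorname{Var}(Z^2)=E\big[(Z^2-1)^2\big]=E\big[(Z-1)^2(Z+1)^2\big]$, it is small exactly when the mass of $Z$ sits near $\pm 1$, i.e. when $X$ concentrates near $\mu\pm\sigma$.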
All three distributions have the same mean and variance.
The black curve is the standard normal density. The green curve shows a fairly concentrated distribution about $\mu\pm\sigma$ (that is, the variance about $\mu\pm\sigma$ is small, so its kurtosis approaches 1, the smallest possible value). The red curve shows a case where the distribution is "pushed away" from $\mu\pm\sigma$; that is, the kurtosis is large.
With that in mind, if we set the threshold points far enough outside $\mu\pm\sigma$ we can push the kurtosis below 3, and still have a higher peak.
34,261 | Kurtosis of made up distribution | Kurtosis is a rather misunderstood concept (I find L. T. DeCarlo's paper "On the Meaning and Use of Kurtosis" (1997) a sensible and valuable discussion and presentation of the issues involved).
So I will take the naive view, and I will construct a density, $g_X(x)$, with "thinner middle and higher value at mode", compared to the standard normal density, but identical "tails" with the latter. I do not claim that this density exhibits "excess kurtosis".
This density will necessarily be piecewise-defined. In order to have identical left and right "tails", its functional form on the intervals $(-\infty, -a)$ and $(a,\infty)$, where $a>0$, should be identical to the standard normal density $\phi(x)$.
In the middle interval, $(-a,a)$, it should have some other functional form, call it $h(x)$. This $h(x)$ should be symmetric around zero, and satisfy
1) $h(0) > \phi(0) = 1/\sqrt{2\pi}$ so that the value of the density at the mode will be higher than the value of the standard normal, and
2) $\phi(-a) = h(-a) = h(a) = \phi(a)$ so that $g_X(x)$ is continuous.
Moreover, $g_X(x)$ should integrate to unity over the domain, in order to be a proper density.
So this density will be
$$g_X(x) = \begin{cases}
\phi(x) & -\infty<x\le -a\\
h(x) & -a\le x \le a\\
\phi(x) & a\le x<\infty
\end{cases}$$
subject to the previously mentioned restrictions on $h(x)$ and also, subject to
$$\int_{-\infty}^{-a}\phi(t)dt + \int_{-a}^ah(t)dt + \int_{a}^{\infty}\phi(t)dt =1$$
which is equivalent to requiring that the probability mass under $h(x)$ in the interval $(-a,a)$ equals the probability mass under $\phi(x)$ in the same interval:
$$\int_{-a}^{a}\left(h(t)- \phi(t)\right)dt =0 \Rightarrow \int_{0}^{a}\left(h(t)- \phi(t)\right)dt=0 $$
the last part due to the symmetry properties.
To obtain something specific, we will "try" the density of the zero-mean Laplace distribution for $h(x)$
$$h(x)= \frac 1{2b} e^{-\frac {|x|}{b}},\; b>0$$
To satisfy the various requirements set previously we must have:
For higher value at mode,
$$h(0)= \frac 1{2b} > \phi(0) = \frac {1}{\sqrt{2\pi}} \Rightarrow 0<b < \sqrt{\pi/2} \qquad [1]$$
For continuity,
$$h(a) = \phi(a) \Rightarrow \frac 1{2b} e^{-\frac {a}{b}} = \frac {1}{\sqrt {2\pi}}e^{-\frac 12a^2}$$
$$\Rightarrow -\ln(2b) - \frac {a}{b} = -\ln(\sqrt {2\pi}) -\frac 12a^2 \Rightarrow \frac 12a^2 - \frac {a}{b} +\ln\frac{\sqrt {\pi/2}}{b} = 0$$
This is a quadratic in $a$. Its discriminant is
$$\Delta_a = \frac 1{b^2} - 4\cdot \frac 12 \cdot\ln\frac{\sqrt {\pi/2}}{b} > 0$$
(it can be easily verified that it is always positive). Moreover, we keep only the positive root since $a>0$, so
$$a^* = \frac 1b + \sqrt{\Delta_a}\qquad [2]$$
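As a sanity check on $[2]$: for any admissible $b$, the $a^*$ it produces should make the two densities meet at the crossover point, since the quadratic came from the continuity condition. A small Python sketch (the value $b=0.9$ is just an arbitrary choice inside $(0,\sqrt{\pi/2})$):

```python
import math

def a_star(b):
    # positive root of (1/2)a^2 - a/b + ln(sqrt(pi/2)/b) = 0, i.e. eq. [2]
    disc = 1 / b**2 - 2 * math.log(math.sqrt(math.pi / 2) / b)
    return 1 / b + math.sqrt(disc)

h   = lambda x, b: math.exp(-abs(x) / b) / (2 * b)      # Laplace density
phi = lambda x: math.exp(-x * x / 2) / math.sqrt(2 * math.pi)

b = 0.9                      # any b in (0, sqrt(pi/2)) works here
a = a_star(b)
# continuity at the crossover: h(a*) equals phi(a*) up to rounding
assert abs(h(a, b) - phi(a)) < 1e-9
```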
Finally the requirement for the density to integrate to unity translates into
$$\int_{0}^{a^*}\frac 1{2b} e^{-\frac {|t|}{b}} dt = \int_{0}^{a^*}\phi(t)dt $$
which by straightforward integration leads to
$$1-e^{-\frac {a^*}{b}} = 2\left(\Phi(a^*) - \frac 12\right) = \operatorname{erf}(a^*/\sqrt2)\qquad [3]$$
which can be solved numerically for $b^*$, and so completely determine the density we are after.
Of course other functional forms symmetric around zero could be tried; the Laplacian pdf was used just for expositional purposes.
34,262 | Kurtosis of made up distribution | The kurtosis of this distribution will probably be higher than that of a normal distribution. I say probably because I am basing this on a rough drawing, and although it might be possible to prove that moving mass in this way always increases kurtosis, I am not positive about that.
Although it is true that it has the same tails as a normal distribution, this distribution will have a lower variance than the normal distribution from which it is derived. Which means that its tails will match the tails of some normal distribution, but not of a normal distribution with the same variance as it. So, the normalized tails will in fact be thicker than the tails of a normal distribution. And, although thicker tails do not automatically mean more kurtosis, in this case the normalized fourth moment will probably also be larger.
34,263 | Kurtosis of made up distribution | It looks like the OP is trying to establish a connection between "peakedness" and kurtosis by keeping the tails fixed and making the distribution more "peaked." There is an effect on kurtosis here, but it is so slight that it is hardly worth a mention. Here is a theorem to support that assertion.
Theorem 1: Consider any probability distribution with finite fourth moment. Construct a new probability distribution by replacing the mass in the $[\mu - \sigma, \mu + \sigma] $ range, keeping the mass outside of $[\mu - \sigma, \mu + \sigma] $ fixed, and keeping the mean and standard deviation at $\mu, \sigma$. Then the difference between the minimum and maximum Pearson moment kurtosis values over all such replacements is $\le 0.25$.
Comment: The proof is constructive; you can actually identify the min and max kurtosis replacements in this setting. Further, 0.25 is an upper bound on the kurtosis range, depending on the distribution. For example, with a normal distribution, the range bound is 0.141, rather than 0.25.
On the other hand, there is a huge effect of tails on kurtosis, as is given by the following theorem:
Theorem 2: Consider any probability distribution with finite fourth moment. Construct a new probability distribution by replacing the mass outside the $[\mu - \sigma, \mu + \sigma] $ range, keeping the mass in $[\mu - \sigma, \mu + \sigma] $ fixed, and keeping the mean and standard deviation at $\mu, \sigma$. Then the difference between the minimum and maximum Pearson moment kurtosis values over all such replacements is unbounded; i.e., the new distribution can be chosen so that the kurtosis is arbitrarily large.
Comment: These two theorems show that the effect of tails on Pearson moment kurtosis is infinite, while the effect of "peakedness" is $\le 0.25$.
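The unboundedness from the tails is easy to see with a toy example in the same spirit (not the theorem's exact construction, since the inner mass changes too): a symmetric three-point distribution with mass $p$ at $\pm c$ and $1-2p$ at $0$, where $p=1/(2c^2)$ fixes the variance at 1. Its kurtosis is then $2pc^4 = c^2$, which grows without bound as a vanishing amount of mass moves outward:

```python
def three_point_kurtosis(c):
    # mass p at +/- c, mass 1 - 2p at 0; p chosen so the variance is 1
    p = 1 / (2 * c**2)
    variance = 2 * p * c**2          # = 1 by construction
    fourth_moment = 2 * p * c**4     # = c**2
    return fourth_moment / variance**2

# kurtosis equals c^2: arbitrarily large as c grows
print(three_point_kurtosis(10), three_point_kurtosis(100))
```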
34,264 | Do I need a balanced sample (50% yes, 50% no) to run logistic regression? | This is not so much a problem of logistic regression per se as it is a problem with classification accuracy as a performance measure. Note that balancing the data set is not necessarily the only valid approach. If one of the classes is actually much more common in the population (and not merely in your sample), a naive model (classifying everything as belonging to the most common category) really is a good guess. If the error costs are not symmetric, balancing the data set might lead you to err in the wrong direction (the more costly one).
The problem also often comes up the other way around: Training/evaluating on some artificially balanced data set before using the resulting model in a strongly unbalanced situation (think detecting fraud or diagnosing a rare disease) where the usefulness of the model is not nearly as high as the raw accuracy would suggest. It all depends on your objectives and your cost structure.
34,265 | Do I need a balanced sample (50% yes, 50% no) to run logistic regression? | Yes; it will affect the results. Logistic regression fits parameters by maximum likelihood, i.e., by minimizing a negative log-likelihood objective that is evaluated at all the data points. If the data are unbalanced then the minimization will be unbalanced too.
While your example is not extreme, you will get different answers if you re-balance.
A good explanation of this and how to address it is in King and Zeng, http://gking.harvard.edu/files/gking/files/0s.pdf.
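King and Zeng's key point can be sketched numerically: under case-control style down-sampling, the slope estimate stays (asymptotically) consistent while the intercept is shifted by the log of the sampling ratio, which is what their prior correction undoes. A rough simulation (scikit-learn assumed available; numbers come from one seeded run):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 100_000
x = rng.normal(size=n)
y = rng.binomial(1, 1 / (1 + np.exp(-(-2.0 + 1.0 * x))))  # true b0=-2, b1=1

def fit(xv, yv):
    # large C to make regularization negligible
    m = LogisticRegression(C=1e6).fit(xv.reshape(-1, 1), yv)
    return m.intercept_[0], m.coef_[0, 0]

b0_full, b1_full = fit(x, y)

# down-sample the majority class (y=0) to a 50/50 "balanced" sample
pos = np.flatnonzero(y == 1)
neg = rng.choice(np.flatnonzero(y == 0), size=pos.size, replace=False)
idx = np.concatenate([pos, neg])
b0_bal, b1_bal = fit(x[idx], y[idx])

# slope barely moves; intercept shifts by roughly log(n_neg / n_pos)
shift = np.log((n - pos.size) / pos.size)
```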
34,266 | Do I need a balanced sample (50% yes, 50% no) to run logistic regression? | Class imbalance can be a real problem.
An alternative to down-sampling would be to assign costs to the different classes, which is supported in popular toolkits.
E.g. look for the -j parameter in SvmLight (for support-vector regression), or the -w in LibLinear (for different kinds of linear regression).
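For illustration, the same cost-assignment idea in a Python setting (scikit-learn's class_weight plays the role of those cost parameters; this is a sketch, not the tools mentioned above):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
x = rng.normal(size=(5000, 1))
# rare positive class (roughly 10% of cases)
y = (rng.random(5000) < 1 / (1 + np.exp(-(x[:, 0] - 2.5)))).astype(int)

plain = LogisticRegression().fit(x, y)
# up-weight errors on the rare class, analogous to -j / -w above
weighted = LogisticRegression(class_weight="balanced").fit(x, y)

# the cost-sensitive fit flags more positives at the default 0.5 threshold
n_plain, n_weighted = plain.predict(x).sum(), weighted.predict(x).sum()
```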
34,267 | Meaning of “post hoc” multiple comparisons | As always, your question implicitly asks for some authoritative answer that might very well not exist. Scheffé's method and Tukey's HSD are usually called post-hoc tests, used for unplanned comparisons and conducted after an omnibus test but that's not a requirement for all such methods.
The main argument for a distinction between planned and unplanned tests is that if you always intended to conduct a limited number of tests (planned contrasts), you don't necessarily need to adjust the error level. If, on the other hand, you are just reporting/testing the differences that look big (post-hoc tests), you might be "capitalizing on chance" and you should adjust not only for the tests you conduct/report but for all possible pairwise comparisons/contrasts in your design.
One issue with all this is that it makes the evaluation of the evidence and the result of a study contingent on the intentions of the experimenter, a most counter-intuitive and undesirable state of affairs. This is sometimes held as an argument against null-hypothesis significance testing as used within psychology.
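To see why adjusting for all possible comparisons matters, note how fast the family of pairwise tests grows and how the unadjusted familywise error inflates (the 0.40 figure assumes, as a rough upper bound, independent tests):

```python
from itertools import combinations

k, alpha = 5, 0.05
m = k * (k - 1) // 2                    # all pairwise comparisons among k groups
familywise = 1 - (1 - alpha) ** m       # chance of at least one false positive
# with 5 groups, m == 10 and familywise is about 0.40
assert m == len(list(combinations(range(k), 2)))
```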
34,268 | Meaning of “post hoc” multiple comparisons | Your original thought was incorrect. Howell's is correct.
Take the simple t-test: some people use it for planned and post hoc comparisons alike, but adjust the p-values for multiple comparisons.
Both of the tests you mention are typically used post-hoc but could be used for planned tests if the planned tests are expected to have multiple comparison issues. For example, in an ANOVA situation where you do planned contrasts after the ANOVA you may want to test non-orthogonal contrasts, or simply recognize that there are still problems with multiple comparisons. In that case you might use a traditionally post-hoc procedure.
Also, the conclusions you can reach regarding the two kinds of testing are quite different. A post-hoc test allows you to draw tentative conclusions to guide further research while the planned test speaks directly to the theories that you're examining.
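For instance, pairwise t-tests with a simple Bonferroni adjustment look like this (a sketch on simulated data; Holm or other step-down methods would follow the same pattern):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# three groups, one of which genuinely differs
groups = {g: rng.normal(loc=mu, size=30) for g, mu in zip("ABC", (0.0, 0.0, 0.8))}

pairs = [("A", "B"), ("A", "C"), ("B", "C")]
raw = [stats.ttest_ind(groups[a], groups[b]).pvalue for a, b in pairs]
# Bonferroni: multiply each p-value by the number of comparisons, cap at 1
adjusted = [min(1.0, p * len(pairs)) for p in raw]
```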
34,269 | Meaning of “post hoc” multiple comparisons | Many people--as well as SPSS--do in fact use the term "post hoc" the way you initially described. Howell's usage may be more common. But rather than argue about which definition is "right," the important thing is to know that when you see the term used it may mean different things. Because it is so inconsistently used, it's probably best to avoid using the term yourself unless it is absolutely clear what you mean. This problem is discussed in Frane, 2015, "Planned Hypothesis Tests Are Not Necessarily Exempt From Multiplicity Adjustment".
Incidentally, Howell's classification of given multiple comparison procedures as being strictly for a priori or unplanned tests appears to be rather arbitrary.
34,270 | Jaccard Similarity - From Data Mining book - Homework problem | Each item in T has an $\frac{m}{n}$ chance of also being in S. The expected number of items common to S & T is therefore $\frac{m^2}{n}$.
Exp. $\text{Jaccard Similarity} = \dfrac{\text{No. of common items}}{\text{Size of T} + \text{Size of S} - \text{Number of common items}} = \dfrac{m}{2n - m}$ (after simplification).
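This closed form is a ratio-of-expectations approximation, a point taken up in the later answers. A quick Monte Carlo sketch in Python (not part of the original answer; the function name is mine) shows how close it comes:

```python
import random

def expected_jaccard_mc(n, m, trials=20000, seed=0):
    """Estimate E[|S & T| / |S | T|] for two random m-subsets of an n-set."""
    rng = random.Random(seed)
    universe = range(n)
    total = 0.0
    for _ in range(trials):
        s = set(rng.sample(universe, m))
        t = set(rng.sample(universe, m))
        total += len(s & t) / len(s | t)
    return total / trials

n, m = 100, 30
print(round(m / (2 * n - m), 4))            # the m/(2n - m) approximation: 0.1765
print(round(expected_jaccard_mc(n, m), 4))  # simulated value, close but not identical
```

The simulated expectation sits slightly above the approximation, since the expectation of a ratio is not the ratio of expectations.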
34,271 | Jaccard Similarity - From Data Mining book - Homework problem | The above answer assumes that an element in $T$ may be repeated several times in $S$ (i.e. $S$ and $T$ are not sets but multisets); else the probability will not be $m/n$ uniformly.
I expect the answer should be more along the following lines:-
Let the number of common elements between $S$ and $T$ be $k$.
Then, as mentioned by ack_inc in the comment to his answer, Jaccard similarity $Sim(S,T)=k/(2m-k)$.
Now, $Pr(Sim(S,T)=k/(2m-k))$ will be $\dfrac{{m\choose {k}} {n-m\choose m-k}}{n\choose m}$ since there are $n$ total elements, of which $m$ are in $S$ and $k$ are common. So the number of ways we can choose $m$ elements for $T$ is given by ${m \choose k}$ (choosing the $k$ common elements from $S$) times ${n-m\choose m-k}$ (choosing the remaining $m-k$ elements from the $n-m$ elements not in $S$).
Thus,
$E(Sim(S,T))=\sum_{k=0}^{m} \dfrac{k}{2m-k} \dfrac{{m\choose {k}} {n-m\choose m-k}}{n\choose m}$.
However, simplifying the above expression is beyond my limited knowledge of combinatorial identities. If anyone can do so, kindly update the answer.
34,272 | Jaccard Similarity - From Data Mining book - Homework problem | I'm posting an alternative solution.
Jaccard similarity of two sets $S$ and $T$ is defined as the fraction of elements these two sets have in common, i.e. $\text{sim}(S,T)=|S\cap T|/|S\cup T|$. Suppose we choose $m$-element subsets $S$ and $T$ uniformly at random from an $n$-element set. What is the expected Jaccard similarity of these two sets? Suppose $|S\cap T|=k$ for some $0\le k\le m$. Notice that for the first set, $S$, we have $\binom{n}{m}$ choices, while for $T$ we have $\binom{m}{k}\binom{n-m}{m-k}$ choices, because $k$ elements must be from $S$ and $m-k$ elements must not be from $S$. This gives us $$\Pr[|S\cap T|=k]=\frac{\binom{m}{k}\binom{n-m}{m-k}}{\binom{n}{m}},$$ meaning that $$\text{E}[\text{sim}(S,T)]=\sum_{k=0}^m\frac{\binom{m}{k}\binom{n-m}{m-k}}{\binom{n}{m}}\frac{k}{2m-k}.$$ Even though $\text{E}[|S\cap T|/|S\cup T|]\neq\text{E}[|S\cap T|]/\text{E}[|S\cup T|]=m/(2n-m)$, this expression seems to give a good approximation.
Thanks to Mitja Trampus for pointing out an alternate solution, with $$\Pr[|S\cap T|=k]=\binom{m}{k}\frac{\binom{m}{k}}{\binom{n}{k}}\frac{\binom{n-m}{m-k}}{\binom{n-k}{m-k}},$$
giving the following expression:
$$\text{E}[\text{sim}(S,T)]=\sum_{k=0}^m\binom{m}{k}\frac{\binom{m}{k}}{\binom{n}{k}}\frac{\binom{n-m}{m-k}}{\binom{n-k}{m-k}}\frac{k}{2m-k}.$$
(The above expressions are, of course, equivalent.)
EDIT: Regarding the simplification, perhaps applying the following identity (from Aigner's book, page 13) could work: $$\binom{n}{m}\binom{m}{k}=\binom{n}{k}\binom{n-k}{m-k}.$$
34,273 | Jaccard Similarity - From Data Mining book - Homework problem | I agree with blazs answer - just want to add a small correction (credit to another guy in the course who pointed it out).
The summation does not start at 0. (you'll see it if you make n=100 and m=99)
$$
\text{E}[\text{sim}(S,T)]=\sum_{k=\max(0,\,2m-n)}^m\binom{m}{k}\frac{\binom{m}{k}}{\binom{n}{k}}\frac{\binom{n-m}{m-k}}{\binom{n-k}{m-k}}\frac{k}{2m-k}.
$$
34,274 | Jaccard Similarity - From Data Mining book - Homework problem | I just want to add the following:
As pointed out, ack_inc's answer is not correct and can serve only as an approximation. Also, the lower bound should be $\max\{0, 2m - n\}$ instead of $0$, as GM1313 mentions.
I needed to compute the similarities for $1\leq m\leq n$ where $n = 5000$ or even bigger, so computing all the probabilities $P_{n, m}[|S\cap T| = k]$ blindly from the definition both takes a lot of time and gives wrong results due to floating-point arithmetic, e.g., $p = 0.0$.
Therefore, I used the fact that ${a \choose b + 1} = \frac{a - b}{b + 1}{a \choose b}$ to speed up the process, and compute the probabilities recursively, starting with the biggest one.
I also used the approximation $m / (2n - m)$ and actually, it is really good, especially for larger values of $n$ (curves for $n\in\{20, 100, 5000\}$; figure omitted).
34,275 | Test Cox proportional hazard assumption (Bad Schoenfeld residuals) | It is likely that the large sample size is responsible for the seemingly strong evidence against the PH assumption. P-values are a function of sample size, and their usefulness declines when sample size grows very large as the null hypothesis is never exactly true. They don't help too much with your question here, which is not "is the PH assumption satisfied" but "is the deviation from the PH assumption so large that inference is impaired".
One way to assess this for categorical variables, which your model seems to mostly contain, is by a log-minus-log plot. This is explained e.g. in this book and easily implemented in R using the rms library
library(rms)
# note: recent versions of rms require npsurv() rather than survfit() here
myfit <- npsurv(Surv(time, status) ~ catvar)
survplot(myfit, loglog = TRUE)
When the PH assumption holds, the lines are parallel, and their vertical distance is the log hazard ratio.
Convergent curves are seen when the difference between the groups decreases with time, and divergent curves when it increases, which indicates some deviation from the PH assumption. Crossing of the curves indicates a more severe deviation, with the effect of group membership changing signs.
34,276 | Dummy coding for contrasts: 0,1 vs. 1,-1 | "Dichotomous Predictor Variables", there are two ways to code dichotomous predictors: using the contrast 0,1 or the contrast 1,-1.
There is no limit to the number of ways they can be coded. Those two are merely the most common (indeed between them, almost ubiquitous), and probably the easiest to deal with.
I kind of understand the distinction here (0,1 is dummy coding and 1,-1 adds to one group and subtracts from the other) but don't understand which to use in my regression.
Whichever is more convenient/appropriate. If you have a designed experiment with equal numbers in each cell, there are some nice aspects to the second approach; if you don't, the first is probably easier in several ways.
For example, if I have two dichotomous predictors, gender (m/f) and athlete (y/n), I could use contrasts 0,1 on both or 1,-1 on both.
What would be the interpretation of a main effect or an interaction effect when using the two different contrasts?
a) (i) Consider a gender main effect (without interaction for simplicity) {m=0, f=1} - then the coefficient corresponding to that dummy will measure the difference in mean between females and males (and the intercept would be the mean of the males).
(ii) For {m=-1, f=1} the gender main effect is half the difference in mean, and the intercept is the average of the means (if the design is balanced it is also the average of all the data). Equivalently, the main effect is the difference of each group mean from the intercept.
b) (i) consider an interaction between gender{m=0,f=1} and athlete {n=0,y=1}
Now the intercept represents the mean of the male non-athletes (0,0), the gender main effect is the difference between the means of the female non-athletes and male non-athletes, the athlete main effect represents the difference between the mean of the male athletes and the male non-athletes, and the interaction is the difference of two differences - it's the mean athlete/non-athlete difference for females minus the mean athlete/non-athlete difference for males.
(ii) consider an interaction between gender {m=-1, f=1} and athlete {n=-1, y=1}
Now the intercept represents the mean of the four group-means (and if the design was completely balanced it would also be the overall mean), rather than the mean of a single reference cell.
The main effects are averages of difference effects - the gender effect is the average of the female-male difference within athletes and the female-male difference within non-athletes. The athlete main effect is the average of the athlete/non-athlete difference within females and the athlete/non-athlete difference within males.
Does it depend on whether my cells are of different sizes?
What do you mean by 'different sizes'? Do you mean that the number of observations in each cell are different? (If so, I largely addressed that above - equal cell numbers give additional meanings/simplify the interpretation, such as making the intercept the grand mean of the data rather than just the mean of group means.)
34,277 | Example where a simple correlation coefficient has a sign opposite to that of the corresponding partial correlation coefficient | The sign of the partial correlation coefficient is the same as the sign of the corresponding linear regression coefficient. (In fact, partial $r$ is just one of the ways to standardize regressional $b$.) So, if we have some variables, for example three, $X$, $Y$, $Z$, and you want to know the sign of $r_{XY.Z}$ - the partial correlation between $X$ and $Y$ - it is enough to know the sign of $b_X$ in the regression of $Y$ on $X$ and $Z$ (or of $b_Y$ in the regression of $X$ on $Y$ and $Z$).
If we assume that the three variables are centered (their means were brought to 0), the formula of a linear regression coefficient found in many textbooks could be written as follows:
$b_X = \frac{SCP_{XY}SCP_{ZZ} - SCP_{ZY}SCP_{XZ}} {SS_XSS_Z - SCP_{XZ}^2}$
where SCP stands for "sum-of-crossproducts" and SS for "sum-of-squares". The denominator here is always positive, so the sign of $b_X$ depends entirely on the numerator. We can expand what is an SCP, for example $SCP_{XY}$:
$SCP_{XY} = \sqrt{SS_X}\sqrt{SS_Y}r_{XY}$
If we substitute all SCP in the numerator accordingly and then simplify we'll get that the numerator is proportional to the quantity
$r_{XY}-r_{ZY}r_{XZ}$
and its sign is the sign of this quantity. So, whatever the sign of zero-order correlation $r_{XY}$, the sign of partial correlation $r_{XY.Z}$ is determined by the last expression. Below is an example: $r_{XY}=.314$, $r_{YZ}=.589$, $r_{XZ}=.606$, $r_{XY.Z}=-.067$, negative because $.314-.589*.606<0$.
X Y Z
1.339 -1.097 .014
.619 1.022 .792
-.722 1.127 .699
-.695 -1.081 -2.016
1.421 .318 1.068
1.467 .002 1.284
-.619 .692 -.691
-.319 1.228 2.002
.478 -1.056 -1.281
.490 .704 1.151
-.316 1.204 .030
-.203 .021 1.176
.168 1.732 1.741
.763 1.090 1.834
2.734 -.227 1.044
-1.603 -.447 -2.056
-.846 -.024 -.335
-.009 .132 .932
-.304 .118 -.938
-.612 -1.878 -1.655
-1.370 -.607 -.499
-.921 -.893 -1.136
-.534 .312 -.282
-.136 -1.189 -1.203
.406 .752 .338
-.069 .559 -.227
.534 -.547 .167
-.450 .417 -.512
1.364 1.319 1.327
-1.019 .190 -.157
1.608 .588 .861
-1.909 -.871 -1.322
.488 -.266 .361
-1.492 -1.645 -1.216
.533 .006 .791
-.341 .890 .939
-.862 .873 -.342
-2.076 -1.051 -1.160
.059 1.314 -.456
-.666 -.652 -1.761
-.742 .885 .606
-.333 -.087 -1.040
.789 .684 1.322
-.121 1.006 .766
.528 -.190 .206
.944 1.752 2.055
-.368 -.548 -.619
-.655 .432 -.141
-.663 -1.176 -1.164
-.799 -1.607 -1.844
.563 -.052 -.011
-.959 -1.281 .267
1.256 .323 .569
-.099 .869 -.693
.813 -1.057 -1.393
1.443 1.519 1.180
1.513 1.662 1.160
1.488 .494 -.285
-.247 .808 .324
-.903 .086 -.912
.750 -1.304 .717
-1.665 -.847 -1.045
-1.945 -.480 -.439
.105 .804 1.303
-.524 1.251 1.201
-.277 -1.400 -.391
-.936 -1.406 -.215
2.029 .318 1.128
-1.214 1.002 -1.313
-.180 .205 -.845
-.364 1.176 -.428
1.087 1.167 1.743
-.736 -.779 -1.038
-.386 1.176 .167
.022 .120 1.399
.749 -1.324 1.507
-.262 -.438 -1.634
-1.199 -.206 -.439
-.339 -1.687 -1.082
-1.529 -1.969 -1.179
-1.028 -.806 -1.331
-1.080 -1.855 -1.958
.072 -.523 .044
-.096 .481 -.214
.220 -.221 .931
1.217 -.801 .412
-1.542 .398 -.735
-1.238 1.301 -.361
.320 .806 .951
-.039 -.198 -.526
.588 -.001 .860
-.682 -1.109 -.607
.767 -.381 .255
-.783 .338 .475
.120 1.227 .345
-.207 -.607 .130
1.450 1.145 .721
-.903 .127 .646
1.567 1.106 .477
.382 -.942 .404
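The sign rule can be checked with the standard first-order partial correlation formula, $r_{XY.Z}=(r_{XY}-r_{XZ}r_{ZY})/\sqrt{(1-r_{XZ}^2)(1-r_{ZY}^2)}$. A quick Python sketch (not part of the original answer) reproduces the $-.067$ from the correlations quoted above:

```python
from math import sqrt

def partial_corr(r_xy, r_xz, r_zy):
    """First-order partial correlation r_XY.Z from the three zero-order correlations."""
    return (r_xy - r_xz * r_zy) / sqrt((1 - r_xz ** 2) * (1 - r_zy ** 2))

r = partial_corr(r_xy=0.314, r_xz=0.606, r_zy=0.589)
print(round(r, 3))  # -0.067: opposite in sign to the zero-order r_XY = .314
```

Note that the numerator is exactly the quantity $r_{XY}-r_{ZY}r_{XZ}$ derived above, and the denominator is always positive, so the sign argument carries over directly.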
34,278 | Example where a simple correlation coefficient has a sign opposite to that of the corresponding partial correlation coefficient | ttnphns gave a very good answer, but to complete the examination question... I take it you want to know intuitively why the partial and simple correlations could have opposite signs.
Consider the following fictional scenario. In a given town, people get fatter as they get older. As a consequence, their doctors recommend that they exercise more. So older (and fatter) people exercise more than young, skinny ones. The correlation between weight and exercise would be positive (simple correlation). But if you adjust for age, you would find that those who exercise have lower weight than those that do not exercise (for a given age).
Ignoring age distorts the apparent relationship between weight and exercise.
Something similar happens with categorical data, where it is called Simpson's paradox.
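The scenario above is easy to reproduce numerically. Below is a small Python sketch (the variable names and effect sizes are made up for illustration) that simulates the age/weight/exercise story and shows the simple and partial correlations coming out with opposite signs:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
age = rng.normal(50, 3, n)
exercise = age + rng.normal(0, 1, n)               # older people exercise more
weight = 2 * age - exercise + rng.normal(0, 1, n)  # at a fixed age, exercise lowers weight

# Simple correlation: positive, because age drives both variables up.
simple_r = np.corrcoef(weight, exercise)[0, 1]

# Partial correlation given age: correlate the residuals after regressing out age.
def residuals(v, ctrl):
    slope, intercept = np.polyfit(ctrl, v, 1)
    return v - (slope * ctrl + intercept)

partial_r = np.corrcoef(residuals(weight, age), residuals(exercise, age))[0, 1]
print(simple_r, partial_r)  # opposite signs
```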
34,279 | Subsample bootstrapping | There are two methods related to your question. One is the m out of n bootstrap and the other is random subsampling. In his original proposal Efron picked the bootstrap sample size to be the same as the original sample size. There was no specific requirement to do that but the idea was to mimic random sampling from the population as closely as possible. However there are situations where this ordinary bootstrap is inconsistent. Bickel and Ren among others showed that taking a smaller sample size m can lead to consistent results. This works asymptotically with m and n both tending to infinity but at a rate so that m/n goes to 0. Random subsampling was introduced by Hartigan and McCarthy in the late 1960s about a decade before the bootstrap. It uses a procedure of randomly sampling subsets of the original sample. It may be that you could take either of these approaches with your data.
For information on the m out of n bootstrap you can consult either of the following books that I authored/co-authored:
An Introduction to Bootstrap Methods with Applications to R
Bootstrap Methods: A Guide for Practitioners and Researchers
This book by Politis, Romano and Wolf goes into random subsampling in great detail:
Subsampling
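As a rough illustration of the m out of n idea, here is a short Python sketch (the function name and the choice $m = \sqrt{n}$ are my own illustrative choices, not taken from the books cited) applied to the sample maximum, a classic statistic for which the ordinary $m = n$ bootstrap is inconsistent:

```python
import numpy as np

def m_out_of_n_bootstrap(data, stat, m, n_boot=2000, seed=0):
    """Draw n_boot resamples of size m (with replacement) and return the statistic on each."""
    rng = np.random.default_rng(seed)
    data = np.asarray(data)
    return np.array([stat(rng.choice(data, size=m, replace=True)) for _ in range(n_boot)])

# Example: the sample maximum, a case where the ordinary (m = n) bootstrap fails.
x = np.random.default_rng(1).uniform(0, 1, 1000)
m = int(len(x) ** 0.5)   # m grows with n while m/n -> 0
reps = m_out_of_n_bootstrap(x, np.max, m)
print(reps.mean(), x.max())
```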
34,280 | What's a good approach to estimate the probability of word frequencies? | I think you want to have a look at what the text-mining people call smoothing. A simple smoothing technique is to add one to every word count, so no word has a zero probability estimate - essentially pretend that every word occurs once more than it does in reality. Generalized, this is sometimes called "Laplace smoothing" or "additive smoothing" - it's a form of shrinkage applied to the probability estimation.
Most of the time for simple problems add-one smoothing will work ok, so it's a good starting point if you're trying to get started, which is what it sounds like.
However there's many more techniques, and you need to be careful applying this "add one" to bigrams/n-grams. There's a very rich literature if you want to get into it. Look up Good-Turing Smoothing and Katz Smoothing, and m-estimate smoothing to get an idea of the flavor of these techniques.
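A minimal Python sketch of the add-one idea described above (the function name is made up for illustration):

```python
from collections import Counter

def laplace_probs(counts, vocab, k=1):
    """Add-k smoothing: pretend every vocabulary word occurred k extra times."""
    total = sum(counts.values()) + k * len(vocab)
    return {w: (counts.get(w, 0) + k) / total for w in vocab}

counts = Counter("the cat sat on the mat".split())
vocab = set(counts) | {"dog"}          # "dog" was never observed
probs = laplace_probs(counts, vocab)
print(probs["dog"], probs["the"])      # no zero estimates
```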
34,281 | What's a good approach to estimate the probability of word frequencies? | It is hard to answer your needs without more detail. In text analysis, word frequencies are replaced by tf*idf which stands for "term frequency times inverse document frequency". This is an empirical score that corrects for the occurrence of terms that are frequent in the corpus and thus do not discriminate documents. It is widely used to compare texts, in particular through the cosine similarity measure.
In practice, you compute the frequency of the term in the document (tf) and multiply it by the log of the inverse fraction of documents containing the term (idf).
The site of Python's NLTK (natural language toolkit) contains an implementation of it, along with other tools and a good deal of explanations.
That said, if what you really want is an estimator of the probability of occurrence of the word, I don't know if you can get better than the frequency. And if 0 count is an issue, you can use the Bayesian estimator (k+1) / (n+1), where k and n are word count and text size respectively.
Edit: for a great read about IDF, take a look at S. Robertson's paper Understanding Inverse Document Frequency
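The computation described above can be sketched in a few lines of Python (this is just one common tf*idf variant; the function name is illustrative):

```python
import math

def tf_idf(docs):
    """One common tf*idf variant: relative term frequency times log(N / document frequency)."""
    n = len(docs)
    df = {}
    for doc in docs:
        for term in set(doc):
            df[term] = df.get(term, 0) + 1
    return [{t: (doc.count(t) / len(doc)) * math.log(n / df[t]) for t in set(doc)}
            for doc in docs]

docs = [d.split() for d in ["the cat sat", "the dog sat", "the dog ran fast"]]
scores = tf_idf(docs)
print(scores[0])  # "the" occurs everywhere, so it scores 0; rarer terms score higher
```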
34,282 | Variance of the product of a random matrix and a random vector | I'll assume that the elements of $\mathbf{y}$ are i.i.d. and likewise for the elements of $\mathbf{X}$. This is important, though, so be forewarned!
The diagonal elements of the covariance matrix equal the sum of $m$ products of i.i.d. random variates, so the variance will equal $m \mathbb{V}(x_{ij}y_j)$, which variance you have above in your first row.
The off-diagonal elements all equal zero, as the rows of $\mathbf{X}$ are independent. To see this, without loss of generality assume $\mathbb{E}x_{ij} = \mathbb{E}y_i = 0 \space \forall\thinspace i,j$. Define $\mathbf{x}_i$ as the $i^{\text{th}}$ row of $\mathbf{X}$, transposed to be a column vector. Then:
$\text{Cov}(\mathbf{x_i^\text{T}y},\mathbf{x_j^\text{T}y}) = \mathbb{E}(\mathbf{x_i^\text{T}y})^\text{T}(\mathbf{x_j^\text{T}y}) = \mathbb{E}\mathbf{y}^{\text{T}}\mathbf{x}_i\mathbf{x}_j^\text{T}\mathbf{y}=\mathbb{E}_y\mathbb{E}_x \mathbf{y}^{\text{T}}\mathbf{x}_i\mathbf{x}_j^\text{T}\mathbf{y}$
Note that $\mathbf{x}_i\mathbf{x}_j^\text{T}$ is a matrix, the $(p,q)^\text{th}$ element of which equals $x_{ip}x_{jq}$. When $i \ne j$, the expectation with respect to $x$ of $\mathbf{y}^{\text{T}}\mathbf{x}_i\mathbf{x}_j^\text{T}\mathbf{y}$ equals 0 for any $\mathbf{y}$, as each element is just the expectation of the product of two independent r.v.s with mean 0 times $y_py_q$. Consequently, the entire expectation equals 0.
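A quick Monte Carlo check of both claims (diagonal equal to $m\,\mathbb{V}(x_{ij}y_j)$, off-diagonals zero), assuming standard normal entries purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
N, m = 200_000, 3
X = rng.normal(size=(N, m, m))     # i.i.d. standard normal entries
y = rng.normal(size=(N, m))        # i.i.d. standard normal entries, independent of X
z = np.einsum('nij,nj->ni', X, y)  # N draws of the product Xy

C = np.cov(z.T)
# Diagonal: m * V(x_ij * y_j) = 3 here; off-diagonals: 0, since the rows of X are independent.
print(np.round(C, 2))
```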
34,283 | Variance of the product of a random matrix and a random vector | The co-variance matrix of $W x$ is
$$
\large{\mathrm{V}[Wx] = \mathrm{diag}(S m) + M \Sigma M^T},
$$
where $\mathrm{diag}(S m)$ is the diagonal matrix with the vector $Sm$ on its diagonal.
This is a generalization of jbowman's answer when we don't assume the entries of the vector are independent, but instead have covariance matrix $\Sigma$.
More precisely:
We let $W$ be a random matrix with independent entries, mean $\mathrm{E}[W]=M$ and elementwise variances $\mathrm{V}[W_{i,j}] = S_{i,j}$.
We let $x$ be a random vector with co-variance matrix $\mathrm{V}[x]=\Sigma$ and raw second moments $\mathrm{E}[x_i^2] = m_i$.
The means of $W$ act on the covariance matrix in the term $M\Sigma M^T$; and the variances of $W$ act on the second moments of $x$ in the term $\mathrm{diag}(Sm)$.
Proof.
I couldn't find that formula in the Matrix Cookbook, but it follows relatively simply from the law of total variance: $\mathrm{V}(y) = \mathrm{E}(\mathrm{V}(y \mid x)) + \mathrm{V}(\mathrm{E}(y \mid x))$.
For the second term
\begin{align}
\mathrm{V}(\mathrm{E}(y \mid x))
&=
\mathrm{V}(\mathrm{E}(Wx \mid x))
\\&=
\mathrm{V}(\mathrm{E}(W) x)
\\&=
\mathrm{V}(M x)
\\&=
M \Sigma M^T
.
\end{align}
For the first term,
$\mathrm{V}(Wx \mid x)$, note that we may assume $M=\mathrm{E}(W)=0$ since
$\mathrm{V}((W'+M)x \mid x)
= \mathrm{V}(W'x \mid x) + \mathrm{V}(Mx \mid x)$, where the second term is constant (given $x$) and thus has variance 0.
Assuming $x$ is constant and $W$ thus mean 0, $Wx$ is a vector of independent entries, and so the co-variance matrix is just a diagonal.
We can compute those diagonal entries:
\begin{align}
(\mathrm{V} Wx)_{ii}
&=
\mathrm{E}[(W_ix)^2] - \mathrm{E}[W_ix]^2
\\&=
\sum_{k,\ell}
\mathrm{E}(W_{i,k}W_{i,\ell}) x_k x_\ell - 0
\\&=
\sum_k
\mathrm{E}(W_{i,k}^2) x_k^2
\\&=
\sum_k
S_{i,k} x_k^2
\\&=
\langle S_i, x^2\rangle,
\end{align}
where again we used that $W$ is here assumed to be mean 0.
And so $\mathrm{V}(Wx)$ is a diagonal matrix with $S (x^2)$ on the diagonal.
Putting it all together we get
\begin{align}
\mathrm{V}(y)
&= \mathrm{E}(\mathrm{V}(y \mid x)) + \mathrm{V}(\mathrm{E}(y \mid x))
\\&= \mathrm{diag}(S \mathrm E(x^2)) + M\Sigma M^T.
\end{align}
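A quick numerical sanity check of the final formula, assuming normal entries for $W$ and a multivariate normal $x$ (the distributions and dimensions are illustrative, not part of the proof):

```python
import numpy as np

rng = np.random.default_rng(0)
d = 3
M = rng.normal(size=(d, d))              # E[W]
S = rng.uniform(0.5, 1.5, size=(d, d))   # elementwise Var[W_ij]
mu = rng.normal(size=d)                  # E[x]
A = rng.normal(size=(d, d))
Sigma = A @ A.T + np.eye(d)              # Var[x]

N = 200_000
W = rng.normal(M, np.sqrt(S), size=(N, d, d))    # independent normal entries
x = rng.multivariate_normal(mu, Sigma, size=N)
y = np.einsum('nij,nj->ni', W, x)

m2 = np.diag(Sigma) + mu**2                      # raw second moments E[x_i^2]
theory = np.diag(S @ m2) + M @ Sigma @ M.T       # diag(S m) + M Sigma M^T
err = np.linalg.norm(np.cov(y.T) - theory) / np.linalg.norm(theory)
print(err)  # small relative error
```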
34,284 | Variance of the product of a random matrix and a random vector | From [1]
The joint-covariance matrix of the product of a real random matrix $X$ of dimension $m\times m$ and a real random vector $y$ of dimension $m\times 1$ is a real matrix of dimension $m\times m$. The element on the $k^\textrm{th}$ row and $l^\textrm{th}$ column of the joint-covariance matrix, denoted as $\operatorname {E} \left[(\mathbf{X} \,\mathbf{y}- \operatorname {E} \left[\mathbf{X} \,\mathbf{y} \right] )(\mathbf{X} \,\mathbf{y}- \operatorname {E} \left[\mathbf{X} \,\mathbf{y} \right] )^{\top }\right]_{k,l}$, is given as
$$\sum\limits_{i=1}^m\sum\limits_{j=1}^m
\Bigl(
\operatorname {cov}_X( X_{ki}, X_{lj})
+ \operatorname {E}_X \left[ X_{ki} \right]
\operatorname {E}_X \left[ X_{lj} \right]
\Bigr)\Bigl(
\operatorname {cov}_Y( y_{i}, y_{j}
)
+ \operatorname {E}_Y \left[ y_{i} \right]
\operatorname {E}_Y \left[ y_{j} \right]
\Bigr) -\operatorname {E}_X \left[ X_{ki} \right]
\operatorname {E}_X \left[ X_{lj} \right] \operatorname {E}_Y \left[ y_{i} \right]
\operatorname {E}_Y \left[ y_{j} \right]
$$
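The elementwise formula can be checked by simulation. The sketch below (assuming jointly normal entries for $X$ and $y$, purely for illustration) compares an empirical covariance of $Xy$ against the double sum above:

```python
import numpy as np

rng = np.random.default_rng(0)
m = 2
MX = rng.normal(size=(m, m))             # E[X]
B = rng.normal(size=(m * m, m * m))
SX = B @ B.T / m                         # Cov of vec(X); entry [(k,i),(l,j)] at [k*m+i, l*m+j]
mu = rng.normal(size=m)                  # E[y]
A = rng.normal(size=(m, m))
Sig = A @ A.T + np.eye(m)                # Cov[y]

N = 200_000
X = rng.multivariate_normal(MX.ravel(), SX, size=N).reshape(N, m, m)
y = rng.multivariate_normal(mu, Sig, size=N)
z = np.einsum('nij,nj->ni', X, y)

theory = np.zeros((m, m))
for k in range(m):
    for l in range(m):
        for i in range(m):
            for j in range(m):
                theory[k, l] += ((SX[k*m+i, l*m+j] + MX[k, i]*MX[l, j])
                                 * (Sig[i, j] + mu[i]*mu[j])
                                 - MX[k, i]*MX[l, j]*mu[i]*mu[j])

err = np.linalg.norm(np.cov(z.T) - theory) / np.linalg.norm(theory)
print(err)  # small relative error
```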
Bibliography
[1]
Proof Verification: Joint variance of the product of a random matrix with a random vector
34,285 | Whether to report confidence intervals of effect sizes such as $r$ and $\eta^2$? | The answer is almost always: report both. This way, your audience can decide on the interestingness and importance of your results, instead of just having to believe you. Confidence intervals are similarly always useful, because they give a neat indication of both effect size, and significance, in one. Even better if it's on a graph :)
It may be best to look at all the possible outcomes, remembering that a significance level ($\alpha$) is an arbitrary threshold, and you can choose any one you like (higher ones are of course harder to defend).
Low p-value, low effect size: You've got a result, you're pretty sure it's not down to chance, but it doesn't really say anything interesting. An example might be a new drug that significantly improves on an old one. But if the improvement is only 2%, then your result might not mean so much when weighed up against other factors (like extra costs or new side effects).
High p-value, high effect size: Looks like you're onto something, but you can't say for certain that it wasn't just the result of chance. For this drug, things look promising, but you DEFINITELY want to do some more testing, probably with a modified experiment that (hopefully) will reduce some of that ridiculous variability you're seeing.
High p-value, low effect size: Nothing interesting here. Go and design a better experiment.
Low p-value, large effect size: Win. You've got a big effect, and you're sure it's not down to chance. If it's a new drug, then you've got a good chance that your results will make a big impact, and it'll get pushed out to market quickly, even if it costs more, or if there are some side effects.
(Note: not pushing drugs here, I'm as mistrustful as the next paranoiac about big pharma, but drugs make for good statistical examples :)
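If your software does not report an interval for the effect size directly, a percentile bootstrap is one simple way to get one. A Python sketch for Pearson's $r$ (the function name and settings are illustrative):

```python
import numpy as np

def bootstrap_ci_r(x, y, n_boot=5000, alpha=0.05, seed=0):
    """Percentile bootstrap confidence interval for Pearson's r."""
    rng = np.random.default_rng(seed)
    x, y = np.asarray(x), np.asarray(y)
    idx = rng.integers(0, len(x), size=(n_boot, len(x)))
    reps = np.array([np.corrcoef(x[i], y[i])[0, 1] for i in idx])
    return np.quantile(reps, [alpha / 2, 1 - alpha / 2])

rng = np.random.default_rng(1)
x = rng.normal(size=200)
y = 0.5 * x + rng.normal(size=200)
r = np.corrcoef(x, y)[0, 1]
lo, hi = bootstrap_ci_r(x, y)
print(f"r = {r:.2f}, 95% CI [{lo:.2f}, {hi:.2f}]")
```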
34,286 | Whether to report confidence intervals of effect sizes such as $r$ and $\eta^2$? | Yes, you should calculate intervals for your statistics. The hard part is not justifying the interval presentation; it would be more difficult to justify not presenting it. It is very likely the exact effect size, mean, or whatever statistic you have calculated as a model for your data is not the true value. Calculating an interval reflects that A) you know that it's not the true value, and B) this is the area where you believe the true value to be. Intervals around the values you calculate allow one to make inferences beyond simple significance tests as well as indicating the power of the experiment.
Also, with a simple linear regression, as you have described, it is often best to report the beta coefficient, and a confidence interval. You could also report one for whatever effect size you select. If your predictor is continuous it's kinda hard to imagine what the CI looks like and it's best to plot it. Your stats package should be able to help with that. For example, in R you could get predict() to return CI values to plot.
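For the simple-regression case, the slope CI can also be computed by hand. A Python sketch using the usual OLS standard-error formula and a normal approximation for the critical value (the data and names are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100
x = rng.normal(size=n)
y = 2.0 + 0.8 * x + rng.normal(size=n)

# OLS slope and its standard error, by hand
xc = x - x.mean()
beta = xc @ y / (xc @ xc)
alpha_hat = y.mean() - beta * x.mean()
resid = y - (alpha_hat + beta * x)
se = np.sqrt(resid @ resid / (n - 2) / (xc @ xc))

lo, hi = beta - 1.96 * se, beta + 1.96 * se   # ~95% CI (normal approximation)
print(f"beta = {beta:.2f}, 95% CI ({lo:.2f}, {hi:.2f})")
```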
34,287 | Problem with informative censoring | This is an attempt to answer the request you made in the comments.
Independence between $T$ and $C$ vs non-informative censoring
In the following, I assume random right censoring.
Take a sample of i.i.d. survival times
$$(y_1, \delta_1), \ldots{}, (y_n, \delta_n),$$
where $y_i = \min(t_i, c_i)$ is the minimum between the survival time and the censoring time, and where $\delta_i = I(t_i \leq c_i)$ is the event indicator. So, using my notation, $T$ is the event time random variable with density $f(\cdot)$ and survival function $S(\cdot)$, while $C$ is the censoring time random variable with density $g(\cdot)$ and survival function $G(\cdot)$.
Under independence between $T$ and $C$, the likelihood function's contribution to an event time $(y_i, 1)$ is easily seen to be
$$"\Pr[T=y_i, C > y_i]" = G(y_i) f(y_i).$$
Similarly, the likelihood function's contribution to censored data $(y_i, 0)$ is
$$"\Pr[C=y_i, T > y_i]" = S(y_i) g(y_i). $$
The likelihood function for the complete data can therefore be written as
$$L = \prod_{i=1}^{n} \left[G(y_i) f(y_i)\right]^{\delta_i} \left[S(y_i) g(y_i)\right]^{1- \delta_i}.$$
Now, assume that the distribution of $C$ does not depend on the parameters of the distribution of $T$. Then the factors $G(y_i)^{\delta_i} g(y_i)^{1-\delta_i}$ are non-informative and can be factored out:
$$L \propto \prod_{i=1}^{n} f(y_i)^{\delta_i} S(y_i)^{1- \delta_i}.$$
This is the usual likelihood when dealing with survival data. Loosely speaking, independence between $T$ and $C$ allows you to split the joint contribution of $T$ and $C$ into their marginal contributions whereas the non-informative censoring assumption allows you to get rid of $g(\cdot)$ and $G(\cdot)$.
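To make the factorization concrete, here is a Python sketch for an exponential event-time model with independent random right censoring (the distributions and rates are made up for illustration); note that the likelihood only involves $f$ and $S$:

```python
import numpy as np

def exp_censored_loglik(lam, y, delta):
    """log L = sum[ delta*log f(y) + (1-delta)*log S(y) ] for an Exponential(lam) model;
    the non-informative censoring factors g, G have been dropped."""
    return np.sum(delta * (np.log(lam) - lam * y) + (1 - delta) * (-lam * y))

rng = np.random.default_rng(0)
t = rng.exponential(1 / 0.5, size=5000)   # event times, true rate 0.5
c = rng.exponential(1 / 0.2, size=5000)   # independent censoring times
y, delta = np.minimum(t, c), (t <= c).astype(float)

# For this model the MLE has a closed form: events / total time at risk
lam_hat = delta.sum() / y.sum()
print(lam_hat)  # close to the true rate 0.5
```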
34,288 | How do I handle predictor variables from different distributions in logistic regression? | Of course you can normalize your features; this will also increase the speed of the learning algorithm.
In order to have comparable $\beta$ at the end of the execution of the algorithm, you should, for each feature $x_i$, compute its mean $\mu_i$ and its range $r_i = \max_i - \min_i$. Then you replace each $r[x_i]$ value, i.e., the value of feature $x_i$ for a record $r$, with:
$$\frac{r[x_i] - \mu_i}{r_i}$$
Now your $r[x_i]$ values lie in the interval $[-1,1]$, so you can compare your $\beta$, and thus your odds ratios, with more confidence. This also shortens the time needed to find the best set of $\beta$ if you are using gradient descent. Just remember to normalize your features in the same way if you want to predict the class of a new record $r'$.
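To make the recipe concrete, here is a small illustrative sketch (plain Python, hypothetical numbers; not from the original answer) of the mean-range scaling described above, keeping the $(\mu_i, r_i)$ pairs so that new records can be scaled with the same statistics before prediction:

```python
def scale_features(rows):
    # per-feature mean-range scaling: (x - mean) / (max - min)
    n, d = len(rows), len(rows[0])
    stats = []
    for j in range(d):
        col = [row[j] for row in rows]
        mu = sum(col) / n
        rng = max(col) - min(col)
        stats.append((mu, rng))
    scaled = [[(row[j] - stats[j][0]) / stats[j][1] for j in range(d)]
              for row in rows]
    return scaled, stats

X = [[1.0, 100.0], [2.0, 300.0], [3.0, 200.0]]
Xs, stats = scale_features(X)
# reuse `stats` to scale a new record r' before predicting its class
```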
You can also add higher-order features, but this can lead to overfitting. Usually, as you add more parameters it is better to add regularization, which tries to avoid overfitting by decreasing the magnitude of your $\beta$. This is obtained by adding this term to the logistic regression cost function:
$$\lambda\sum_{i=0}^{n}\beta_i^2$$
where $\lambda$ tunes the strength of the regularization.
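As an illustration (not from the original answer), the penalized cost can be written down directly. The sketch below penalizes every $\beta_i$, matching the sum starting at $i=0$ in the formula above, although in practice the intercept is often left unpenalized:

```python
import math

def regularized_cost(betas, X, y, lam):
    # negative log-likelihood of logistic regression plus lam * sum(beta^2)
    cost = 0.0
    for xi, yi in zip(X, y):
        z = sum(b * x for b, x in zip(betas, xi))
        p = 1.0 / (1.0 + math.exp(-z))          # predicted probability
        cost -= yi * math.log(p) + (1 - yi) * math.log(1 - p)
    return cost + lam * sum(b * b for b in betas)

# rows of X carry a leading 1 for the intercept term beta_0
X = [[1.0, 0.0], [1.0, 1.0]]
y = [0, 1]
```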
I would suggest having a look at Stanford's classes about machine learning here: http://www.ml-class.org/course/video/preview_list, Units 6 and 7.
34,289 | How do I handle predictor variables from different distributions in logistic regression? | @Simone makes some good points, so I will just throw in a couple of complementary tidbits. Although normalization can help with things like speed, logistic regression does not make assumptions about the distributions of your predictor variables. Thus, you don't have to normalize. Second, while adding a squared term can lead to overfitting (and you need to be cautious about that) it is permissible. What that would mean is that the probability of success is higher in the middle of a predictor's range than at the extremes (or vice versa).
34,290 | How do I handle predictor variables from different distributions in logistic regression? | In theory, the scale of your inputs is irrelevant to logistic regression. You can "theoretically" multiply $X_1$ by $10^{10^{10^{10}}}$ and the estimate for $\beta_1$ will adjust accordingly. It will be $10^{-10^{10^{10}}}$ times smaller than the original $\beta_1$, due to the invariance property of MLEs.
But try getting R to do the above adjusted regression - it will freak out (won't even be able to construct the X matrix).
This is a bit like the Cholesky decomposition algorithm for calculating a matrix square root. Yes, in exact mathematics, Cholesky decomposition never involves taking the square root of a negative number, but round-off errors and floating point arithmetic may lead to such cases.
You can take any linear combination of your X variables, and the predicted values will be the same.
Suppose we take @simone's advice and use the re-scaled X variables for fitting the model. We can then use the invariance property of MLE to get the beta that we want, after using numerically stable input X variables. It may be that the beta on the original scale is easier to interpret than the beta on @simone's transformed one. So, we have the transformed $x_{ij}$ ($i$th observation for the $j$th variable), call it $\tilde{x}_{ij}$, defined by:
$$\tilde{x}_{ij}=a_{j}x_{ij}+b_{j}$$
@simone's choice corresponds to $a_{j}=\frac{1}{x_{[N]j}-x_{[1]j}}$ and $b_j=\frac{\overline{x}_{j}}{x_{[N]j}-x_{[1]j}}$ (using $x_{[i]j}$ to denote the $i$th order statistic of the $j$th variable, i.e. $x_{[N]j}\geq x_{[N-1]j}\geq\dots\geq x_{[1]j}$). The $a_j$ and $b_j$ can be thought of as algorithm parameters (chosen to make the algorithm more stable and/or run faster). We then fit a logistic regression using $\tilde{x}_{ij}$, and get parameter estimates $\tilde{\beta}_j$. Thus we write out the linear predictor:
$$z_i = \tilde{\beta}_0 + \sum_j\tilde{x}_{ij}\tilde{\beta}_j$$
Now substitute the equation for $\tilde{x}_{ij}$ and you get:
$$z_i = \tilde{\beta}_0 + \sum_j(a_{j}x_{ij}+b_{j})\tilde{\beta}_j=\beta_0+\sum_jx_{ij}\beta_j$$
Where
$$\begin{array}{c c}\beta_0=\tilde{\beta}_0+\sum_jb_{j}\tilde{\beta}_j & \;\;\;\;\;\; & \beta_j=a_j\tilde{\beta}_j \end{array}$$
You can see that theoretically, the parameters $a_j,b_j$ make no difference at all: any choice (apart from $a_j=0$) will lead to the same likelihood, because the linear predictor is unchanged. It even works for more complicated linear transforms, such as representing the X matrix by its principal components (which involves rotations). So we can back-transform the results to get the betas that we want for interpretation.
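A quick numeric check of the back-transformation (hypothetical coefficients and data, not from the original answer): the back-transformed $\beta$ reproduces exactly the linear predictor obtained on the transformed scale:

```python
def linear_predictor(betas, b0, X):
    return [b0 + sum(b * x for b, x in zip(betas, xi)) for xi in X]

# hypothetical fitted coefficients on the transformed scale x~ = a*x + b
a = [0.5, 2.0]               # per-feature scale factors a_j
b = [1.0, -3.0]              # per-feature shifts b_j
bt0, bt = 0.7, [1.2, -0.4]   # tilde-beta_0 and tilde-beta_j

# back-transform: beta_j = a_j * tilde_beta_j,
# beta_0 = tilde_beta_0 + sum_j b_j * tilde_beta_j
beta = [aj * bjt for aj, bjt in zip(a, bt)]
beta0 = bt0 + sum(bj * bjt for bj, bjt in zip(b, bt))

X = [[1.0, 2.0], [3.0, -1.0]]
Xt = [[aj * x + bj for aj, bj, x in zip(a, b, xi)] for xi in X]

z_original = linear_predictor(beta, beta0, X)
z_transformed = linear_predictor(bt, bt0, Xt)
# identical linear predictors, hence identical likelihood
```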
34,291 | The "sum" of prediction intervals | In short, no, you don't just add the limits. Maybe if the predictions were perfectly correlated, but that's not usually the case at all.
Typically, if the model assumes independence and you want an interval for a sum of predicted values, you might think that you can treat the predictions as independent; but they generally aren't independent even when the observations are, because the predictions share parameter estimates.
In ordinary regression it's fairly straightforward; you can work out the mean and standard deviation of the sum and construct a t interval similar to the way you would for a single prediction.
If the model is the multiple regression model $y = X \beta + e$ with $e \sim N(0, \sigma^2 I)$, and you're predicting a vector of future values, $y_f$, for which you have a set of predictors, $X_f$,
then you want an interval for $a' y_f$ (where in your case, $a$ is a vector of $1$s).
Then $R = a' (y_f - \hat{y}_f) \sim N(0, \sigma^2 m)$,
where $m = a' (I + X_f (X'X)^{-1} X_f') a$,
so $Q = R/(s\sqrt{m})$ is distributed as (standard) Student $t$ with degrees of freedom equal to those in the estimate of the variance $\sigma^2$, which for regression is normally $n-p$, where $p$ is the number of predictors including the constant. From the interval for $Q$, you can then back out an interval for $R$ and then a prediction interval for $a'y_f$.
Assuming I didn't screw up along the way.
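To make the algebra concrete, here is a small Python sketch (hand-rolled matrix helpers and a hypothetical design matrix; not part of the original answer) computing $m = a'(I + X_f (X'X)^{-1} X_f')a$, from which the interval half-width would be $t_{n-p}\, s \sqrt{m}$:

```python
def transpose(M):
    return [list(col) for col in zip(*M)]

def matmul(A, B):
    Bt = transpose(B)
    return [[sum(x * y for x, y in zip(row, col)) for col in Bt] for row in A]

def inv2(M):
    # inverse of a 2x2 matrix
    (p, q), (r, s) = M
    det = p * s - q * r
    return [[s / det, -q / det], [-r / det, p / det]]

# design matrix (intercept + one predictor) and future design points
X  = [[1.0, 0.0], [1.0, 1.0], [1.0, 2.0], [1.0, 3.0]]
Xf = [[1.0, 4.0], [1.0, 5.0]]
a  = [[1.0], [1.0]]                 # sum the two future predictions

XtX_inv = inv2(matmul(transpose(X), X))
H = matmul(matmul(Xf, XtX_inv), transpose(Xf))   # X_f (X'X)^{-1} X_f'
I = [[1.0, 0.0], [0.0, 1.0]]
IplusH = [[I[i][j] + H[i][j] for j in range(2)] for i in range(2)]
m = matmul(matmul(transpose(a), IplusH), a)[0][0]
# the prediction interval for a'y_f is then a'y_hat_f +/- t_{n-p} * s * sqrt(m)
```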
34,292 | The "sum" of prediction intervals | In order to compute the variance of a sum of forecasts you need to incorporate the covariance between these forecasts. Thus compute the variance and covariance of the observed series for as many lags as the length of your forecast. Compute the sum of these variances and covariances in a standard manner and then use this to talk about uncertainty in the sum of your forecasts. We use this routinely to take daily predictions and convert them to probabilities of making month-end numbers.
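As a sketch of the arithmetic (hypothetical covariance matrix, not from the original answer): the variance of a sum of forecasts is the sum of all entries of their covariance matrix, i.e. each variance plus twice each pairwise covariance:

```python
def variance_of_sum(cov):
    # Var(F1 + ... + Fh) = sum of every entry of the covariance matrix
    h = len(cov)
    return sum(cov[i][j] for i in range(h) for j in range(h))

# hypothetical covariance matrix for a 3-step-ahead forecast
cov = [[4.0, 1.0, 0.5],
       [1.0, 3.0, 1.5],
       [0.5, 1.5, 2.0]]
total_var = variance_of_sum(cov)   # 9 from variances + 6 from covariances
```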
34,293 | Textbook with list of hypothesis tests and practical guidance on use | Statistical Rules of Thumb (Wiley, 2002), by van Belle, has a lot of useful rules of thumb for applied statistics.
34,294 | Textbook with list of hypothesis tests and practical guidance on use | In general, I would have a look at statistics books in your domain of application (e.g., whether it is psychology, ecology, medical, sociology, etc.).
Such books tend to have less rigour.
Instead, such books often try to give useful decision rules to assist researchers where statistics is not the main interest of the researcher.
Here are a few suggestions coming from a behavioural and social sciences perspective.
Multivariate books
If you want practical tips on techniques like multiple regression, factor analysis, PCA, and so forth, these books are options:
Hair et al Multivariate data analysis: This has very few formulas but lots of flow charts and simple decision rules designed to assist less-mathematically inclined social scientists implement multivariate stats.
Tabachnick and Fidell Multivariate Statistics. This arguably has more rigour than Hair et al, but it does have sections devoted to giving practical advice.
SPSS Cookbook
I know that a lot of psychology research students who are looking for a cookbook approach (perhaps just to get themselves started) to analysing their data turn to the SPSS Survival Manual. However, this is SPSS-centric and more about tips on implementing analyses in SPSS.
34,295 | Textbook with list of hypothesis tests and practical guidance on use | For a thorough overview of tests, I can recommend the Handbook of Parametric and Nonparametric Statistics by David Sheskin.
34,296 | Textbook with list of hypothesis tests and practical guidance on use | Biometry,
by Sokal and Rohlf
has a fairly comprehensive table with such information on the inside of the front and back covers, but these tables apparently didn't make it (perhaps due to placement) into Google's digitized version.
34,297 | Textbook with list of hypothesis tests and practical guidance on use | Could this help?
The following table shows general guidelines for choosing a statistical analysis. We emphasize that these are general guidelines and should not be construed as hard and fast rules.
From the UCLA Stata/SAS/R tutorial pages. I use a revised version in class.
34,298 | How do I choose what SVM kernels to use? | Do your analysis with several different kernels. Make sure you cross-validate. Choose the kernel that performs the best during cross-validation and fit it to your whole dataset.
/edit: Here is some example code in R, for a classification SVM:
#Use a support vector machine to predict iris species
library(caret)
library(caTools)
#Choose x and y
x <- iris[,c("Sepal.Length","Sepal.Width","Petal.Length","Petal.Width")]
y <- iris$Species
#Pre-Compute CV folds so we can use the same ones for all models
CV_Folds <- createMultiFolds(y, k = 10, times = 5)
#Fit a Linear SVM
L_model <- train(x,y,method="svmLinear",tuneLength=5,
trControl=trainControl(method='repeatedCV',index=CV_Folds))
#Fit a Poly SVM
P_model <- train(x,y,method="svmPoly",tuneLength=5,
trControl=trainControl(method='repeatedCV',index=CV_Folds))
#Fit a Radial SVM
R_model <- train(x,y,method="svmRadial",tuneLength=5,
trControl=trainControl(method='repeatedCV',index=CV_Folds))
#Compare 3 models:
resamps <- resamples(list(Linear = L_model, Poly = P_model, Radial = R_model))
summary(resamps)
bwplot(resamps, metric = "Accuracy")
densityplot(resamps, metric = "Accuracy")
#Test a model's predictive accuracy Using Area under the ROC curve
#Ideally, this should be done with a SEPARATE test set
pSpecies <- predict(L_model,x,type='prob')
colAUC(pSpecies,y,plot=TRUE)
34,299 | Combining auto.arima() and ets() from the forecast package | The likelihoods from the two model classes, and hence the AIC values, are not comparable due to different initialization assumptions. So your function is not valid. I suggest you try out the two model classes on your series and see which gives the best out-of-sample forecasts. | Combining auto.arima() and ets() from the forecast package | The likelihoods from the two model classes, and hence the AIC values, are not comparable due to different initialization assumptions. So your function is not valid. I suggest you try out the two model | Combining auto.arima() and ets() from the forecast package
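The comparison the answer recommends can be sketched as a simple holdout exercise. Below is a Python illustration with two stand-in forecasters (naive and drift) in place of ARIMA and ETS; the series and models are hypothetical:

```python
def mae(forecasts, actual):
    return sum(abs(f - a) for f, a in zip(forecasts, actual)) / len(actual)

def naive_forecast(train, h):
    return [train[-1]] * h            # last observed value carried forward

def drift_forecast(train, h):
    slope = (train[-1] - train[0]) / (len(train) - 1)
    return [train[-1] + slope * i for i in range(1, h + 1)]

# hold out the last observations and score each model out of sample
series = [10, 12, 13, 15, 16, 18, 19, 21]
train, test = series[:6], series[6:]
scores = {"naive": mae(naive_forecast(train, 2), test),
          "drift": mae(drift_forecast(train, 2), test)}
best = min(scores, key=scores.get)
```

The same shape of comparison applies to fitted ARIMA and ETS models: refit on the training window, forecast the holdout, and keep whichever class forecasts best.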
34,300 | Boxplots as tables | I tend to think that boxplots will convey more effective information if there are numerous empirical distributions that you want to summarize into a single figure. If you only have two or three groups, editors may ask you to provide numerical summaries instead, either because it is more suitable for the journal policy, or because readers won't gain much insight into the data from a figure. If you provide the three quartiles, range, and optionally the mean $\pm$ SD, then an informed reader should have a clear idea of the shape of the distribution (symmetry, presence of outlying values, etc.).
I would suggest two critical reviews by Andrew Gelman (the first goes the other way around, but still it provides insightful ideas):
Gelman, A, Pasarica, C, and Dodhia, R. Let's practice what we preach. The American Statistician (2002) 56(2): 121-130.
Gelman, A. Why Tables are Really Much Better than Graphs. (also discussed on his blog)