Q: Can PC1 explain more than 90% of variance?

Let $S \in \mathbb{R}^{p \times p}$ be the sample covariance matrix (which is proportional to $X'X$ for a centered, intercept-free design matrix $X$; the scaling does not affect the ratios below), and let $\lambda_1 \geq \lambda_2 \geq \cdots \geq \lambda_p \geq 0$ be its non-negative eigenvalues. The proportion of total variation explained by the first $k$ PCs is then given by
\begin{align}
p_k = \frac{\sum_{j = 1}^k \lambda_j}{\sum_{j = 1}^p \lambda_j}, \; k = 1, 2, \ldots, p.
\end{align}
In particular, the total variation percentage explained by the first PC is
\begin{align}
p_1 = \frac{\lambda_1}{\sum_{j = 1}^p\lambda_j}. \tag{1}
\end{align}
In view of $(1)$, there is theoretically no upper bound strictly less than $1$ for $p_1$. In other words, there exist $X$ for which $p_1$ is arbitrarily close to $1$. For example, any design matrix $X$ of the form $(2)$ below has a first PC that explains a proportion $1 - \varepsilon$ of the total variation:
\begin{align}
X = U\begin{bmatrix}
\operatorname{diag}(\sqrt{1 - \varepsilon}, c, \ldots, c) \\
0
\end{bmatrix}V', \tag{2}
\end{align}
where $c = \frac{\sqrt{\varepsilon}}{\sqrt{p - 1}}$, and $U$ and $V$ are arbitrary orthogonal matrices of orders $n$ and $p$, respectively. It is therefore not at all surprising to observe cases with $p_1 > 90\%$.
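The construction in $(2)$ can be verified numerically. Below is a quick sketch (mine, not part of the original answer) that builds such an $X$ with NumPy and confirms that the first PC explains exactly $1 - \varepsilon$ of the total variation:

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, eps = 20, 5, 0.05

# Random orthogonal U (n x n) and V (p x p) from QR decompositions
U, _ = np.linalg.qr(rng.standard_normal((n, n)))
V, _ = np.linalg.qr(rng.standard_normal((p, p)))

# Middle factor of (2): diag(sqrt(1 - eps), c, ..., c) stacked on a zero block
c = np.sqrt(eps / (p - 1))
D = np.zeros((n, p))
D[:p, :p] = np.diag([np.sqrt(1 - eps)] + [c] * (p - 1))

X = U @ D @ V.T

# Eigenvalues of X'X are the squared singular values of X
lam = np.sort(np.linalg.eigvalsh(X.T @ X))[::-1]
p1 = lam[0] / lam.sum()
print(p1)  # ~0.95, i.e. 1 - eps
```

The eigenvalues sum to $1 - \varepsilon + (p-1)c^2 = 1$ by construction, so $p_1 = 1 - \varepsilon$ regardless of the choice of $U$ and $V$.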
Q: Best Practices with Data Wrangling before running Random Forest Predictions

> When doing predictions with Random Forests, we very often (or always) need to perform some pre-processing.
This is not true. Random Forest is really "off-the-shelf".
> Outliers. Should we remove them all? If so, do we consider an outlier based on the 3/2 rule? Should we keep them? Why?
The base model used in RF is a large decision tree (usually built via CART). Decision trees are robust to outliers because they isolate them in small regions of the feature space. And since the prediction for each leaf is the average (for regression) or the majority class (for classification), outliers that are isolated in separate leaves won't influence the rest of the predictions (in the case of regression, for instance, they would not affect the means of the other leaves). Bottom line: you don't need to care about outliers in RF. Just remove them if they are aberrant observations (e.g., due to recording errors). If they're valid cases, you can keep them.
> When dealing with deltas of observations (as an example, suppose I'm subtracting one student's grade from another), should I normalize the deltas of all students or just stick to the absolute delta? Sticking to the same student case: if I have cumulative data (suppose for every test I sum their last grades), should the process be the same?
The question here is not really related to RF; it is algorithm-independent. The real question is: what do you want to do? What are you trying to predict?
> Do we need to apply any data transformation, like log or any other? If so, when should it be done? When the data range is large? What's the point of changing the domain of the data here?
For the same reasons you don't need to worry about outliers, you don't need to apply any kind of data transformation when using RF. For classification, you may need to apply some kind of resampling/weighting strategy if you have a class-imbalance problem, but that's it.
> If I have a categorical target, can I apply regression instead of classification, so the output would be (suppose the classes are 0, 1, 2) 0.132, 0.431; would it be more accurate?
You cannot apply regression if your target is categorical.
> In what kind of problems is Random Forest more indicated? Large datasets?
RF is indicated for all types of problems. People (especially in the medical field, genomics, etc.) even use it primarily for its variable importance measures. In genetics, where researchers face the "small $n$, large $p$" problem, RF also does very well. Anyhow, machine learning in general requires sufficient amounts of training and testing data, though there's no general rule. If your training data represents all your concepts and these concepts are easy to capture, a couple of hundred observations may suffice. However, if what should be learned is very complex and some concepts are underrepresented, more training data will be needed.
> Should I discard the less important variables? Maybe they just create noise?
Another nice feature of decision trees built through CART is that they automatically put aside the non-important variables (only the best splitters are selected at each split). In the seminal book by Hastie et al. (2009), the authors showed that with 100 pure-noise predictors and 6 relevant predictors, the relevant variables were still selected 50% of the time at each split. So you really don't need to worry about variable selection in RF. Of course, if you know that some variables are not contributing, don't include them, but if the underlying mechanisms of the process you're studying are mostly unknown, you can include all your candidate predictors.
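The reason scaling and log transforms are irrelevant here is that a CART split depends only on the ordering of a feature's values, so any strictly monotone transformation leaves the chosen partition unchanged. A minimal illustration with a hand-rolled variance-minimizing stump (my own sketch; the data are made up):

```python
import numpy as np

def best_split_partition(x, y):
    """Return the set of indices sent left by the SSE-minimizing split on x."""
    order = np.argsort(x, kind="stable")
    xs, ys = x[order], y[order]
    best_sse, best_k = np.inf, None
    for k in range(1, len(xs)):
        if xs[k] == xs[k - 1]:      # no valid threshold between tied values
            continue
        left, right = ys[:k], ys[k:]
        sse = ((left - left.mean()) ** 2).sum() + ((right - right.mean()) ** 2).sum()
        if sse < best_sse:
            best_sse, best_k = sse, k
    return frozenset(order[:best_k])

rng = np.random.default_rng(1)
x = rng.uniform(0.1, 10.0, 200)
y = (x > 5).astype(float) + rng.normal(0, 0.1, 200)

# log and sqrt are monotone, so the stump picks exactly the same partition
assert best_split_partition(x, y) == best_split_partition(np.log(x), y)
assert best_split_partition(x, y) == best_split_partition(np.sqrt(x), y)
```

Because the SSE of each candidate split depends only on which observations fall on each side, and a monotone transform preserves the sorted order, every tree in the forest sees the same set of candidate partitions either way.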
Q: Best Practices with Data Wrangling before running Random Forest Predictions

Theoretically, Random Forest is ideal, as is commonly assumed and as described by Breiman and Cutler. In practice, it is very good but far from ideal. Therefore, these questions are very valid.
RF does not handle outliers as ideally as is widely assumed. It is susceptible to even a single outlier with extreme values, as was shown in "How are Random Forests not sensitive to outliers?", and there are also a couple of papers about how heteroscedasticity affects RF predictions. In real-life data, you may have a lot (1-2%) of such outliers, caused by typos (for human-inputted data, e.g. 3200 instead of 32.00), jumps of electrical current due to induction or simply due to unexpected exposures (for IoT), heteroscedasticity, etc. These "outliers" end up in many leaves of the decision trees, pulling predictions toward higher values.
In the case of unbalanced data where a large number of observations have target_value = 0, RF tends to underestimate predictions significantly.
Log transformations can improve accuracy, especially in the case of very skewed data (with very long tails). See, for example, "Forecasting Bike Sharing Demand" by Jayant Malani et al. (pdf) and this Kaggle submission.
RF tends to assign higher importance to variables that have a larger range of values (both categorical and continuous). For example, see this blog post: "Are categorical variables getting lost in your random forests?"
So, data preprocessing is very important even in the case of Random Forest.
I hope this answer frames the validity of the questions, and the links will provide some starting points for answers.
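The bias toward variables with many distinct values is easy to reproduce: on a pure-noise target, a continuous feature offers far more candidate thresholds than a binary one, so a greedy split finds a larger apparent impurity reduction by chance. A small self-contained experiment (my own sketch, not from the answer):

```python
import numpy as np

def best_gain(x, y):
    """Largest SSE reduction achievable by a single threshold split on x."""
    order = np.argsort(x, kind="stable")
    xs, ys = x[order], y[order]
    total = ((y - y.mean()) ** 2).sum()
    best = total
    for k in range(1, len(xs)):
        if xs[k] == xs[k - 1]:      # no valid threshold between tied values
            continue
        left, right = ys[:k], ys[k:]
        best = min(best, ((left - left.mean()) ** 2).sum()
                   + ((right - right.mean()) ** 2).sum())
    return total - best

rng = np.random.default_rng(0)
gain_cont = gain_bin = 0.0
for _ in range(30):                                 # average over replications
    y = rng.normal(size=200)                        # pure-noise target
    x_cont = rng.uniform(size=200)                  # ~200 candidate thresholds
    x_bin = rng.integers(0, 2, 200).astype(float)   # a single candidate threshold
    gain_cont += best_gain(x_cont, y)
    gain_bin += best_gain(x_bin, y)

# The many-valued noise feature reliably looks "more important"
assert gain_cont > gain_bin
```

Neither feature carries any signal, yet impurity-based importance would rank the continuous one higher, which is the effect the linked blog post describes.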
Q: Best Practices with Data Wrangling before running Random Forest Predictions

When pre-processing data, you are generally trying to achieve the following:
A. Removal of errors from your data. If your outliers are due to data recording errors, for example, you would want to fix this in the pre-processing stage. The various rules for identifying outliers should be treated as providing initial guesses, requiring further investigation.
B. Creating variables where you have a reasonable expectation that different values of the predictor variables may be correlated with the outcome variable. This is the bit that requires domain-specific knowledge, and good variables are often constructed using ratios, differences, averages of variables, etc.
C. Modifying the data to avoid restrictive assumptions of whatever model we are fitting.
The super-cool thing about tree-based methods, like random forests, is that they require much less effort in type C pre-processing. In particular, normalizing, removing non-error outliers, discarding variables, and log transformations are not generally required. But the cost of tree-based methods is that they are data hungry, so with smaller samples (e.g., fewer than 10,000 cases), my experience is that a GLM is often going to do a better job; but this brings in the type C processing, which after 25 years of building models I still find a challenge.
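For type A, the 1.5 x IQR fence (the "3/2 rule" mentioned in the question) makes a reasonable first pass, with anything flagged treated as a candidate for investigation rather than automatic removal. A small sketch with made-up grade data, including a 3200-for-32.00 style recording error:

```python
import numpy as np

def iqr_flags(x, k=1.5):
    """Flag values outside [Q1 - k*IQR, Q3 + k*IQR] as outlier candidates."""
    q1, q3 = np.percentile(x, [25, 75])
    fence = k * (q3 - q1)
    return (x < q1 - fence) | (x > q3 + fence)

grades = np.array([32.0, 28.5, 35.1, 30.2, 29.8, 3200.0])  # 3200 likely a typo for 32.00
print(iqr_flags(grades))   # only the last value is flagged
```

Whether a flagged point is an error to fix or a valid extreme to keep is exactly the further investigation the answer calls for.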
Q: Getting P value with mixed effect with lme4 package [duplicate]

I'm pasting the information from help("pvalues", package = "lme4") here.
Users who need p-values have a variety of options. In the list below, the methods marked MC provide explicit model comparisons; CI denotes confidence intervals; and P denotes parameter-level or sequential tests of all effects in a model. The starred (*) suggestions provide finite-size corrections (important when the number of groups is <50); those marked (+) support GLMMs as well as LMMs.
- likelihood ratio tests via anova (MC, +)
- profile confidence intervals via profile.merMod and confint.merMod (CI, +)
- parametric bootstrap confidence intervals and model comparisons via bootMer (or PBmodcomp in the pbkrtest package) (MC/CI, *, +)
- for random effects, simulation tests via the RLRsim package (MC, *)
- for fixed effects, F tests via the Kenward-Roger approximation using KRmodcomp from the pbkrtest package (MC)
- car::Anova and lmerTest::anova provide wrappers for pbkrtest; lmerTest::anova also provides t tests via the Satterthwaite approximation (P, *)
- afex::mixed is another wrapper for pbkrtest and anova, providing "Type 3" tests of all effects (P, *, +)
- arm::sim, or bootMer, can be used to compute confidence intervals on predictions.
When all else fails, don't forget to keep p-values in perspective.
Q: Getting P value with mixed effect with lme4 package [duplicate]

p-values in lme4 are deliberately not listed by default; see:
Bates (author of lme4) on p-values in linear mixed models
or here
see also:
How to get an "overall" p-value and effect size for a categorical factor in a mixed model (lme4)?
There are some "approximations", but it is better to just forget about p-values in LMMs (or to forget about them in general, because they mostly "measure" the sample size).
Q: Can a mathematically sound prediction interval have a negative lower bound?

Mathematics is reality-agnostic, so your negative lower prediction band can certainly be mathematically sound.
I would argue, however, that this is a good indication that you are using the wrong mathematics, e.g., ordinary least squares (which assumes a normal distribution of errors) with count data (where a normal distribution makes no sense). I would suggest using Poisson regression or some similar method that is more suitable for count data.
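To make the failure concrete, here is a self-contained illustration (mine, not from the answer): Poisson counts with a small mean near $x = 0$, fitted by ordinary least squares, give a normal-theory 95% prediction interval whose lower bound is negative even though counts cannot be.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
x = rng.uniform(0, 2, n)
y = rng.poisson(np.exp(0.2 + 0.5 * x)).astype(float)   # counts, mean ~1.2 at x = 0

# OLS fit of y on x, then a normal-theory 95% prediction interval at x = 0
X = np.column_stack([np.ones(n), x])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
s2 = ((y - X @ beta) ** 2).sum() / (n - 2)
x0 = np.array([1.0, 0.0])
se_pred = np.sqrt(s2 * (1.0 + x0 @ np.linalg.inv(X.T @ X) @ x0))
lo, hi = x0 @ beta - 1.96 * se_pred, x0 @ beta + 1.96 * se_pred

print(lo, hi)   # the lower bound comes out negative, although counts cannot be
```

A Poisson regression would instead model the log-mean, and any interval built on that scale stays positive after exponentiation.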
Q: Can a mathematically sound prediction interval have a negative lower bound?

It suggests to me that you haven't used an analytic approach with an appropriate transformation of the outcome. With count data, for instance, popular linear models (Poisson regression and negative binomial regression in particular) model the log of the process as a linear function of predictors. Any predicted values resulting from such a model would then have to be exponentiated and would thus be positive.
Similarly, when you use the predict.glm function with se.fit set to TRUE for these models, you calculate symmetric prediction intervals for counts on the log scale. Re-exponentiating those values ensures that you have intervals which do not include 0. You'll notice that the exponentiated predictions are the same as you would get from setting type='response' in the predict function. However, asking for both type='response' and se.fit=TRUE will confuse R, since the link transformation of the GLM means you'll have non-symmetric intervals (the SE of the fit is calculated on the transformed outcome scale).
There are additive count models, just like there are additive risk models for binary endpoints, but I think the results can be difficult to interpret, and they behave untenably for predictions near the boundary values of the support (0 for count data). As such, I'd be dubious not only about your negative predictions but about all the other predictions from your model.
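The effect of back-transforming is easy to see with made-up numbers for a fitted log-mean and its standard error (both hypothetical): the interval is symmetric on the log scale, but exponentiating keeps it positive and makes it asymmetric on the count scale.

```python
import math

mu, se = 1.0, 0.5                   # hypothetical fitted log-mean and its SE
lo = math.exp(mu - 1.96 * se)       # ~1.02
point = math.exp(mu)                # ~2.72
hi = math.exp(mu + 1.96 * se)       # ~7.24

print(lo, point, hi)
```

The lower bound stays above 0 by construction, and hi - point exceeds point - lo: the interval is right-skewed on the original scale, which is the non-symmetry the answer describes.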
Q: Why is the arithmetic mean > median on a histogram skewed to the right?

A histogram represents probability by area:
In this figure, the white region (to the left of $x=1$) comprises half the area. The blue region comprises the other half. The boundary between them at $x=1$ is, by definition, the median: it splits the total probability exactly in half.
The areas in the next figure are shaded with varying densities of black:
The density of black is directly proportional to the horizontal distance from the middle (around 1.65 here). Each point near $x=7$ is very dark. Such points contribute proportionately more to the total amount of black ink used to shade this figure. The central place (where the shading becomes white) is chosen to make the total amount of black to its right equal the total amount of black to its left. This makes it equal to the mean.
We see that the distant values ($x$ larger than $3$ or so) contribute so much black that they "pull" the dividing line--the mean--towards them.
Another way to see this uses three dimensions. The mean is the point at which the two volumes (pink/yellow and blue/purple) are exactly equal:
This figure was constructed by sweeping the original histogram (shown in the $x$ (horizontal) and $z$ (up) directions) from side to side around the mean value. This caused the long extended "tail" at the right to sweep out a larger region, because it is further from the mean than the rest of the figure. By virtue of that, it contributes more to the volume.
Were we to try the same thing by sweeping around the median (at $x=1$), we would get unequal volumes:
The white line on the ground still shows the mean, but now the axis of sweeping is around the median. Although the median correctly splits the cross-sectional area into two, it allows more volume to the right because the points to the right are "skewed" away from the median. Thus the sweeping axis has to be shifted toward larger values of $x$ to make the volumes balance.
In this figure, the white region (to the left of $x=1$) comprises half the area. The blue region comprises the other half. The boundary between them at $ | Why is the arithmetic mean > median on a histogram skewed to the right?
A histogram represents probability by area:
In this figure, the white region (to the left of $x=1$) comprises half the area. The blue region comprises the other half. The boundary between them at $x=1$ is, by definition, the median: it splits the total probability exactly in half.
The areas in the next figure are shaded with varying densities of black:
The density of black is directly proportional to the horizontal distance from the middle (around 1.65 here). Each point near $x=7$ is very dark. Such points contribute proportionately more to the total amount of black ink used to shade this figure. The central place (where the shading becomes white) is chosen to make total amount of black to its right equal the total amount of black to its left. This makes it equal to the mean.
We see that the distant values ($x$ larger than $3$ or so) contribute so much black that they "pull" the dividing line--the mean--towards them.
Another way to see this uses three dimensions. The mean is the point at which the two volumes (pink/yellow and blue/purple) are exactly equal:
This figure was constructed by sweeping the original histogram (shown in the $x$ (horizontal) and $z$ (up) directions) from side to side around the mean value. This caused the long extended "tail" at the right to sweep out a larger region, because it is further from the mean than the rest of the figure. By virtue of that, it contributes more to the volume.
Were we to try the same thing by sweeping around the median (at $x=1$), we would get unequal volumes:
The white line on the ground still shows the mean, but now the axis of sweeping is around the median. Although the median correctly splits the cross-sectional area into two, it allows more volume to the right because the points to the right are "skewed" away from the median. Thus the sweeping axis has to be shifted toward larger values of $x$ to make the volumes balance. | Why is the arithmetic mean > median on a histogram skewed to the right?
A histogram represents probability by area:
In this figure, the white region (to the left of $x=1$) comprises half the area. The blue region comprises the other half. The boundary between them at $ |
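A quick numeric sketch of the balance-point idea, using illustrative data of my own chosen to mimic the figures (median at $x=1$, mean around $1.65$; the values are not read off the plots): the signed deviations about the mean cancel exactly, while the long right tail makes the deviations about the median sum to a positive number.

```python
from statistics import mean, median

# Illustrative right-skewed data (my own values): median 1, mean 1.65
data = [0, 0.5, 0.5, 1, 1, 1, 1.5, 2, 3, 6]

m, med = mean(data), median(data)
print(m, med)  # 1.65 1.0

# The mean is the balance point: signed deviations cancel...
print(sum(x - m for x in data))    # ~0
# ...while about the median the long right tail dominates:
print(sum(x - med for x in data))  # 6.5
```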
37,910 | Why is the arithmetic mean > median on a histogram skewed to the right? | Here is a simple answer:
Skew to the right means that the largest values are farther from the mean than the smallest values are (I know that isn't technically right, and not specific, but it gets the idea across). If the largest values are farther from the mean they will influence the mean more than the smallest values will, thus making it larger. However, the effect on the median will be the same for the largest and smallest values.
For example, let's start with some symmetrically distributed data:
1 2 3 4 5
mean = 3, median = 3.
Now, let's skew it to the right, by making the largest values bigger (farther from the mean):
1 2 3 40 50
mean = 96/5 = 19.2 ... but median still = 3.
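The arithmetic above can be checked directly with Python's standard library:

```python
from statistics import mean, median

symmetric = [1, 2, 3, 4, 5]
skewed = [1, 2, 3, 40, 50]  # same data with the largest values pulled right

print(mean(symmetric), median(symmetric))  # 3 3
print(mean(skewed), median(skewed))        # 19.2 3
```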
37,911 | Why is the arithmetic mean > median on a histogram skewed to the right? | "Always" is wrong: take for example the data $\{1,1,2,2,3\}$ which has a mean of $1.8$, a median of $2$ and a positive skewness.
But more typically, positive skewness is associated with some extreme values above the median and fewer or less extreme values below the median. These will typically push up both the skewness and the mean.
Counter-examples can be constructed by having a few values far above the median on one side and more values but only moderately extreme on the other side.
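The counter-example is easy to verify numerically; here is a small check, using the third central moment as the sign of the skewness:

```python
from statistics import mean, median

data = [1, 1, 2, 2, 3]
m, med = mean(data), median(data)
m3 = sum((x - m) ** 3 for x in data) / len(data)  # third central moment

print(m, med, m3)  # mean 1.8 < median 2, yet the skewness is positive
```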
37,912 | How to learn statistics for medical research? | What would you think about that?
This is a good question. I've spent a good amount of time during my Ph.D in Biostatistics consulting for academic physicians and their research. If you (and moderators) will allow for an opinion based answer then I'm happy to give it.
Medicine for some reason has created a culture in which the physician is intended to do everything themselves. Study design, data collection, analysis, writing, oh yes, and on top of that their clinical duties and learning more about their specialty. These include responsibilities of an epidemiologist, data architect, and statistician, just to name a few. Personally, I think that is a ridiculous onus to put on a researcher. This also might explain why medical research seems to be a copy-paste affair with bad statistics. Statistics is hard to learn, medicine is hard to learn, so learning both tends to mean taking shortcuts on one or the other or both (and understandably, it is the statistical rigour that is sacrificed).
Rather than succumb to these expectations it might be wiser to, as whuber notes, befriend a biostatistician. Collaboration is a good way to learn, because you get consistent advice tailored to your specific situation as opposed to a mishmash of approaches from different courses with different learning goals. I'm not saying to defer all statistical work to a statistician, nor am I saying you should not learn about statistics independently, but I think rushing to learn all these things while also being a physician will lead to poorer work than if you were patient and collaborative.
The question is then "How do I meet/befriend a biostatistician?". Your medical school is likely attached to a university, in which there may or may not be an epidemiology department. Epidemiologists focus very carefully on how to do quality studies in a medical setting. They should be well versed enough in statistics to help you out with design, data collection, and analysis. If you don't have an epidemiology department, there may be someone in a stats/math department, or in the sociology department (sociology is not exactly like biostatistics, but the difference between an epidemiologist and a sociologist grows smaller and smaller).
EDIT:
EdM makes a good point about the basis of fundamental probability and statistics. I'm not prepared to give a list of topics to learn and places to learn them. I think any undergraduate curriculum in science can give you enough to get started.
That being said, if pressed to offer one resource on a basis of prob and stats, I would recommend Introduction to Medical Statistics by Martin Bland. The book is geared towards medical students and in the introduction states
This book is intended as an introduction to some of the statistical ideas important to medicine. It does not tell you all you need to know to do medical research. Once you have understood the concepts discussed here, it is much easier to learn about the techniques of study design and statistical analysis required to answer any particular question.
The book however does not cover probability, and so you're free to pick up most introductory texts on the matter to cover that base. I agree with Bland that this book should serve as a good basis to read academic medical literature critically, and should serve as an excellent jumping off point to learn more about statistics in medicine.
37,913 | How to learn statistics for medical research? | I spent the last 14 years trying to get better at statistics including earning an applied master's in it (which in honesty was disappointing). I think the answer is that there is no simple way to do this. This is made worse by the fact that much of the academic literature that uses statistics is not done by true statisticians IMHO (which of course is true of me) and they do things (lots of things) that violate various rules because they are not true experts. Simple examples include applying a method wrong, ignoring threats to external validity, using stepwise regression, failing to address omitted variable bias, dropping variables from the model because they are not statistically significant and rerunning the model with the same data and on and on. One frightening article years ago looking at articles in the New England Journal of Medicine and Lancet found a wide range of statistical errors in articles applying logistic regression, and they are of course elite journals. And this ignores the fact that statisticians often disagree themselves.
So if you are going to learn statistics, it's a lot of work. You might look at the UCLA (Idre) site or, for time series, the Duke University site as a starter.
37,914 | How to learn statistics for medical research? | The discussions above make fantastic points. I recommend a parallel approach of finding a biostatistician collaborator and learning about biostatistics. On the latter, concentrate on learning things that the biostatistician is unlikely to already know, to spur them on to a better understanding. I've tried to cover lots of things in BBR that are in this category---things that biostatisticians and clinical researchers have to unlearn in order to make progress. As just one example, it is commonly accepted that computing change from baseline is OK in analyzing patient outcomes. BBR goes to great lengths to show why you should not compute change from baseline. You'll see lots of discussions about learning medical statistics, and collaborating, at datamethods.org.
37,915 | How to learn statistics for medical research? | You're Getting The Order Wrong.
It's as though you've said "I want to learn medicine, therefore I bought a scalpel." R is a very useful tool. But it will be much harder to learn statistics from using R than it would be to learn to use R after already knowing statistics.
If I were learning statistics from scratch, I would not start with Bayesian methods. I would start really simple. Use physical objects to understand how possible outcomes work first (like dice and coins). If you flip a coin, how many outcomes are there? Can you list them out? How about two coins, etc. Getting a solid understanding of the set of possible outcomes is the best basis for thinking about how "likely" something is to fall into a given subset of those outcomes.
Conveniently, this is the way that most introductory statistics classes are taught. As such, I really recommend that you start learning statistics with an intro class.
Following that, probably the next most important skill is formal logic. That's likely to be filed under philosophy at your school or online. You will essentially never be able to actually "prove" anything with research in a strict, formal sense. Example: with the logic chain A -> B -> C, maybe you test B->C a million times, and get the desired result every time. That demonstration doesn't "prove" that B->C. But practically, we would probably accept research that asserted B->C. What you need there is the grounding and discipline to understand and clearly state your full logic chain and all your assumptions. Doing so will make the analysis easier, and will make your results more robust.
Once you have that, you can look at Bayesian approaches much more usefully. I find it's easiest to think of Bayes as dealing with some epistemological problems with frequentist statistics, which means it won't make much sense unless you already understand the basics of statistics and of the logic chains. (Some people have claimed success with learning Bayesian statistics from the get-go. I have trouble even imagining how one could usefully do that. YMMV.)
Unfortunately, while you can get a solid working knowledge of applied statistics that way, it will be exceedingly difficult to get a true understanding without calculus, and fairly difficult calculus at that. So if you really want to dive deep, this is about the point where you would want to brush up on that. Then look for a calculus-based statistics course. It might be called "Statistics for scientists and engineers" or similar.
37,916 | How to learn statistics for medical research? | It sounds like you have just started with statistics and do not have experience with it yet. That is totally OK.
But your goal is the other extreme: you want to be a statistics pro. That is impossible. Relax and do not push yourself too much here.
Keep one very important thing in mind: a professional scientist (or however you want to name that) never works alone. Research is a team effort. The workload itself can be done by one person, but the thinking, developing and dropping of ideas and questions, and arguing with research colleagues and other experts is always a team effort.
And that will bring you back to statistics.
Even well-experienced researchers do not do their statistics alone. They make a plan for how to collect and how to analyze the data. But when they have finalized that plan, they do not start collecting data right away; instead they contact a statistician for statistical consulting (translated from German).
Through that process you learn how to do things, and especially where the borders of your knowledge and expertise lie and when to ask other people.
In short: you do not have to work alone. Your supervisor and your university should be able to offer you something here.
37,917 | How do we know if the correlation is significant? | For what range of values of $r_{x,y}$ can we [...] proceed to predict Y by using a linear regression?
If the relationship is indeed linear, any value of correlation can work; linear regression behaves as it should across the entire range of correlations, including 0. You don't even need to examine the correlation beforehand (it seems to serve no purpose not already covered by the usual regression calculations).
However, that's a big if. You can get any correlation (except exactly 1 or -1) and not have linearity; a large (magnitude of) correlation doesn't necessarily imply the relationship is actually linear (nor does a small one imply that it isn't); correlation is not of itself a useful way to decide on the suitability of a linear regression model.
In the case of multiple regression, examining bivariate correlations is even more problematic, since the marginal bivariate correlations may be quite different from what you get in a multiple regression model. (See the Wikipedia articles on Simpson's paradox and omitted variable bias, for example.)
However, if you're interested in whether the regression is doing something useful in terms of prediction, we'd need to pin down precisely what is intended by "useful". In some cases that might be attributable to correlation values.
On the other hand, if you're instead asking "how do we perform a hypothesis test of a Pearson correlation?" you should probably edit the question to make that explicit. Under suitable assumptions you get a "standard" test readily available in packages - or fairly easily carried out by hand. [However, you're not limited to those specific assumptions, other tests of a Pearson correlation - including nonparametric tests - are possible.]
37,918 | How do we know if the correlation is significant? | There is a difference between a well-evidenced effect and a strong effect. For example, there is good evidence that eating bacon causes cancer, but the risk is low; and there is weak evidence that smoking marijuana leaf causes cancer, but the risk is probably high. (The reason for the gap is that the bacon eaters are subject to more medical surveillance than ganja smokers.)
So a useful statistical test of whether the correlation is well evidenced is based not on the correlation coefficient alone, but also on the sample size.
Another feature of the situation that matters is how much of the variation is explained by the correlation: this is the R-squared statistic, the coefficient of determination.
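One way to see the role of sample size is the usual t statistic for testing a Pearson correlation against zero, $t = r\sqrt{n-2}/\sqrt{1-r^2}$. The sketch below (the numbers are my own illustration) shows a tiny correlation of $r = 0.05$ — explaining only $r^2 = 0.25\%$ of the variation — becoming "significant" once $n$ is large.

```python
import math

def t_stat(r, n):
    # t statistic for testing a Pearson correlation against zero
    return r * math.sqrt(n - 2) / math.sqrt(1 - r * r)

weak_r = 0.05  # explains only 0.25% of the variation (R-squared)
print(t_stat(weak_r, 20))      # ~0.21: far from significant
print(t_stat(weak_r, 10_000))  # ~5.0: "significant", yet still a weak effect
```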
37,919 | How do we know if the correlation is significant? | Often the term "significance" is used in the meaning "$\rho$ is statistically significantly different from zero". This is, however, not what most users of $\rho$ are interested in, because the null hypothesis that $\rho$ is exactly zero is almost certainly false. Hence even the tiniest deviation from zero becomes "significant" for a sample size that is large enough.
It is generally of more interest whether a correlation is strong. What is considered a "strong" correlation depends on the field, but here is a rule of thumb taken from an introductory textbook (here is an online reference for the same rule):
\begin{eqnarray*}
|\rho|\leq 0.3: & & \mbox{weak correlation}\\
0.3 < |\rho|\leq 0.7: & & \mbox{moderate correlation}\\
|\rho|> 0.7: & & \mbox{strong correlation}\\
\end{eqnarray*}
I would thus suggest, not to do a hypothesis test against $\rho=0$, but to report a confidence interval for $\rho$. You can find the formulas, e.g., here, and most statistical packages provide functions that compute it for you, for example cor.test in R. Then you can see how far this interval overlaps with the "weak" range.
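A minimal sketch of this confidence-interval approach in Python, using the standard Fisher z-transform approximation (the same large-sample formula `cor.test` relies on); the simulated data and the helper name `correlation_ci` are illustrative:

```python
import numpy as np
from scipy import stats

def correlation_ci(x, y, level=0.95):
    """Approximate CI for Pearson's rho via the Fisher z-transform."""
    r = np.corrcoef(x, y)[0, 1]
    n = len(x)
    z = np.arctanh(r)                      # Fisher transform of r
    se = 1.0 / np.sqrt(n - 3)              # approximate standard error of z
    zcrit = stats.norm.ppf(0.5 + level / 2)
    lo, hi = np.tanh(z - zcrit * se), np.tanh(z + zcrit * se)
    return r, (lo, hi)

rng = np.random.default_rng(0)
x = rng.normal(size=50)
y = 0.4 * x + rng.normal(size=50)          # moderately correlated data
r, (lo, hi) = correlation_ci(x, y)
print(f"r = {r:.3f}, 95% CI = ({lo:.3f}, {hi:.3f})")
```

Note that the interval is asymmetric around $r$, because the back-transform via tanh is nonlinear; you can then read off how much of it falls in the "weak" range.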
37,920 | How do we know if the correlation is significant? | You could use the following test to check whether there is significant correlation between $X$ and $Y$. Assume that you have the observations $(x_i,y_i), i =1,\dots,n$.
The null and alternative hypothesis are given by:
$$
H_0: \, \rho = 0 \quad vs. \quad H_1: \rho \neq 0
$$
The test statistic is given by:
$$
T = \sqrt{n-2}\frac{\hat{\rho}}{\sqrt{1-\hat{\rho}^2}}\overset{H_0}{\sim} t_{n-2}
$$
where $\hat{\rho}$ is the sample estimate for the correlation coefficient, i.e.
$$
\hat{\rho}=\frac{\frac{1}{n}\sum_{i=1}^n((x_i-\bar{x})(y_i-\bar{y}))}{\sqrt{\frac{1}{n}\sum_{i=1}^n(x_i-\bar{x})^2} \cdot \sqrt{\frac{1}{n}\sum_{i=1}^n(y_i-\bar{y})^2} }
$$
Thus, the null is rejected if $\vert T\vert >t_{n-2;1-\frac{\alpha}{2}}$.
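The test above can be sketched in Python as follows (the data are simulated for illustration; scipy's `pearsonr` serves only as a cross-check, since its p-value is based on the same $t_{n-2}$ reference distribution):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n = 30
x = rng.normal(size=n)
y = 0.5 * x + rng.normal(size=n)     # simulated, truly correlated data

rho_hat = np.corrcoef(x, y)[0, 1]
T = np.sqrt(n - 2) * rho_hat / np.sqrt(1 - rho_hat**2)   # the statistic above
t_crit = stats.t.ppf(1 - 0.05 / 2, df=n - 2)             # alpha = 0.05, two-sided
p_value = 2 * stats.t.sf(abs(T), df=n - 2)
print(f"T = {T:.3f}, critical value = {t_crit:.3f}, reject: {abs(T) > t_crit}")

# cross-check: scipy's built-in test uses the same null distribution
_, p_scipy = stats.pearsonr(x, y)
```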
37,921 | How do we know if the correlation is significant? | All you need is to compute the degree of freedom of the system, which is the number of participants minus $2$, and then refer to the table of critical values for $r$, which can be found here.
However, this gives you nothing but the mathematical significance, you still need to have a look at the scatterplot to see if a linear relationship really is a good guess or not.
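Such a table of critical values can be reproduced by inverting the $t$-test for a correlation: solving $t = r\sqrt{n-2}/\sqrt{1-r^2}$ for $r$ gives $r_{\text{crit}} = t_{\text{crit}}/\sqrt{t_{\text{crit}}^2 + \text{df}}$. A short sketch (the helper name is illustrative):

```python
import math
from scipy import stats

def critical_r(n, alpha=0.05):
    """Two-sided critical value of |r| for sample size n (df = n - 2)."""
    df = n - 2
    t = stats.t.ppf(1 - alpha / 2, df)
    return t / math.sqrt(t**2 + df)

for n in (10, 30, 100):
    print(n, round(critical_r(n), 3))   # matches the usual printed tables
```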
37,922 | How do we know if the correlation is significant? | Like hypothesis testing in general, all you can do is propose a null hypothesis and calculate the probability of seeing the data given that null hypothesis. There is no point at which the data "definitely" comes from correlated sources, only some line in the sand where you decide that the data is "unlikely enough" to reject the null.
If you want to know how to calculate the p-value, you need the correlation coefficient and the degrees of freedom (the number of data points minus two). The test statistic generally used is $t = \frac{r \sqrt{n-2}}{\sqrt{1-r^2}}$, which follows a $t_{n-2}$ distribution under the null; the p-value is the corresponding tail probability. There are many online calculators that give you $p$ given $n$ and $r$.
However, this formula is for the null hypothesis that the data is coming from normal IID. Just because this null is rejected, does not mean that there isn't some other hypothesis that doesn't involve correlation between $X$ and $Y$; if there is correlation within $X$ and $Y$, rather than between them, that increases the probability of seeing large sample correlation.
37,923 | Usefulness of the confidence interval | A confidence interval is typically more useful than a hypothesis test. A hypothesis test tells you if you can rule out a specific null hypothesis (typically, $0$). On the other hand a confidence interval demarcates an infinite set of values that, if they had been your null, would have been rejected similarly. (Likewise, it gives the set of potential null values that would not have been rejected.) For example, consider a 95% confidence interval for a mean $(.1, .9)$. The p-value for the (nil) null is $<.05$, but the confidence interval also lets you know that if your null value had been $1.0$, it would have been rejected as well.
A confidence interval also helps you differentiate between high level of confidence and a large effect. People are often impressed by an effect that is highly significant (e.g., $p<.0001$), and conclude that it must be really important. However, p-values conflate the size of the effect with the clarity of the effect. You can get a low p-value because the effect is large or because the effect is small, but you have very many data. This isn't ambiguous if you're looking at a confidence interval that is, say, $(.05, .15)$ versus $(5, 15)$.
In addition, a confidence interval is usually more informative than a point estimate. Although the point estimate returned by some fitting function will typically be the single most likely value (conditional on your data and your model), it isn't actually very likely to be the true value. There is, as you mention, no guarantee that the true value lies within a, say, 95% confidence interval (for instance, there isn't a 95% chance that the true value is in a 95% confidence interval[1]). That said, it is more likely that the true value lies within the interval than it is that the point estimate is the true value—this should be obvious since the point estimate is within the interval. In fact, you could think of a point estimate as a $0\%$ confidence interval.
1. Why does a 95% Confidence Interval (CI) not imply a 95% chance of containing the mean?
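The duality between the confidence interval and the hypothesis test can be demonstrated directly: null values outside a 95% t-interval for the mean are rejected at $\alpha = .05$, and values inside are not. A sketch with simulated data (all settings are illustrative):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
x = rng.normal(loc=0.5, scale=1.0, size=40)   # simulated sample

# 95% t-interval for the mean
lo, hi = stats.t.interval(0.95, df=len(x) - 1, loc=x.mean(), scale=stats.sem(x))

# a null value is rejected at alpha = .05 exactly when it lies outside the CI
for mu0 in (lo - 0.1, (lo + hi) / 2, hi + 0.1):
    p = stats.ttest_1samp(x, popmean=mu0).pvalue
    print(f"H0: mu = {mu0:+.3f}   p = {p:.4f}   rejected: {p < 0.05}")
```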
37,924 | Usefulness of the confidence interval | I think it's best understood with a simple example.
Imagine you are on a farm that raises sheep. The farm has a lot of sheep but you only observe 5. Out of those 5 sheep, 1 sheep was black and 4 sheep were white. And you are interested in the real proportion of black/white sheep within the farm. What can you tell about that proportion based on the 5-sheep sample you just saw?
One question might be - is it reasonable to think that the true proportion of black/white sheep on the farm is equal (50/50)? To answer this question you can calculate the probability of seeing 4 white and 1 black sheep (or a more extreme difference) if the true proportion of black sheep is 0.5. This is a p-value.
Another question is the inverse of the first one - given the sample of sheep you just saw - what proportions are not unreasonable to consider? You can dismiss the possibility of the farm having only white sheep, since you already saw one black sheep. We can say that you dismiss it because the probability to see 1 black and 4 white sheep, if all the sheep are white, is 0. But how to go further? Well, you can calculate all the proportions for which the chance of seeing 1 black and 4 white sheep is greater than 5%. Those would be the "reasonable proportions" based on your observation. This is a 95% confidence interval.
So you can think about the confidence interval as a sort of philosophical tool that, given certain assumptions and under certain conditions, allows you to expand your inductive reasoning - going from observations to generalizations. As you can see, there is no need to repeat anything multiple times at all.
Disclaimer: the above example is simplified for brevity. In particular - it doesn't mention the assumption that your observations of sheep are independent of one another. And that in two-tailed scenarios you would also have to consider 1 white / 4 black cases.
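The "scan over reasonable proportions" described above can be sketched by inverting an exact binomial test (scipy's `binomtest` with its default two-sided p-value; the grid and threshold are illustrative, and the resulting interval is only approximately the textbook one):

```python
import numpy as np
from scipy.stats import binomtest

k, n = 1, 5   # observed: 1 black sheep out of 5

# keep every proportion whose two-sided exact test p-value exceeds 5%
grid = np.linspace(0.001, 0.999, 999)
plausible = [p for p in grid if binomtest(k, n, p).pvalue > 0.05]
print(f"plausible proportions of black sheep: "
      f"{min(plausible):.3f} to {max(plausible):.3f}")
```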
37,925 | Usefulness of the confidence interval | The answer by gung-Reinstate Monica is fine.
I add the following. In reality there is no such thing as a true model. There is no true parameter either (as such a parameter is only defined within a model). What we do is we use models to think about a reality that is different, however we don't have better tools than artificial formal models to make quantitative statements.
So let's imagine that what we observe in reality behaves like a data generating process, potentially infinitely repeatable, modelled by some distribution with a parameter $\mu$, say, that we identify, in our brains, with some real quantity we are interested in. What we want to use the model for is to quantify the uncertainty, because we think of the real process as having some random variation, i.e., we will get other numbers when we do the same thing next time, the precise explanation of which is either unobservable or not of interest or not worth the effort finding out. What we want is some indication of how far away from reality we might be with our best guess (the parameter estimate), because we are convinced, normally from experience, that the data that we have will not tell us precisely what is going on, however they hint at it, with some possible variation. The confidence interval is a way to use model-thinking to quantify this. It asks: If the model is true, which parameter values could have given rise to the data that we have observed?
The confidence interval gives us a set of parameter values that, were the model true, are all compatible with what was observed, i.e., what was observed is a realistic, at least fairly typical thing if any of the values in the confidence interval were true, and pretty atypical if other values were true. Therefore it gives a set of "realistic" parameter values. That said, as I wrote before, none of these is really true, however as long as we think of the real situation in terms of the model, it makes sense to think of the model taking one of these parameter values. That may look disappointingly far away from reality, but it is hard to do better really. That's the nature of models. (Epistemic Bayesian logic would be an alternative, but it comes with problems that turn out to be fairly similar if you look at them in the right way.)
The positive side of this modest way of interpreting things is that it doesn't rely on the model setup being literally fulfilled. Particularly there is no need to indeed repeat the experiment to give the result a meaning. This is an imagination anyway, a tool for thinking about the situation, the possibility of which can be more or less close to reality. (Obviously the advantage of indeed being able to repeat this several times is that we have better ways to assess if the model world is reasonably in line with the real world.)
Issues: (1) As I wrote, the model is in fact not true. This is not generally a problem (it's the nature of models actually), but it is a problem if it is violated in ways that make thinking in terms of the specific model strongly misleading. A typical issue is if data are strongly positively correlated if in fact the model assumes them independent - you'll end up with a far too narrow confidence interval.
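Issue (1) can be illustrated by simulation: for strongly autocorrelated data, a nominal 95% interval for the mean covers the true value far less often than advertised. A sketch under assumed settings (AR(1) data with $\varphi = 0.9$; all parameters are illustrative):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)

def ar1(n, phi, rng):
    """AR(1) series with true mean 0: strongly positively correlated draws."""
    x = np.empty(n)
    x[0] = rng.normal()
    for t in range(1, n):
        x[t] = phi * x[t - 1] + rng.normal()
    return x

def covers_zero(x):
    """Does the nominal 95% t-interval for the mean contain the true mean 0?"""
    lo, hi = stats.t.interval(0.95, df=len(x) - 1,
                              loc=x.mean(), scale=stats.sem(x))
    return lo < 0 < hi

coverage = np.mean([covers_zero(ar1(50, 0.9, rng)) for _ in range(2000)])
print(f"nominal 95%, actual coverage about {coverage:.0%}")
```

The interval assumes independent observations; positive autocorrelation makes the standard error estimate far too small, hence the interval far too narrow.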
(2) Confidence intervals oversimplify things in the sense that you specify a confidence level, and then parameters are either in or out. However, the difference in compatibility with the data of parameters that are borderline in or borderline out, respectively, is not that big. If your confidence interval is, say, $[5,10]$, it is not really appropriate to think that 5.1 is a perfectly realistic value whereas 4.9 is totally not. Rather 4.9 is just slightly more unrealistic than 5.1, which is conceivable as true but may (depending on the exact model, statistic used etc.) be substantially less realistic than, say, 7.
37,926 | When two events $A$ and $B$ have no result in common | No, events with no result in common are not independent if the events come from the same sample space.
An example: Throw a single fair die. Let event A be 'throw is a 1', and event B be 'throw is a 2'. Then $P(A) = P(B) = 1/6$, but $P(A|B) = P(B|A) = 0$, as the die throw can't be both 1 and 2.
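The example can be verified by enumerating the sample space, using exact fractions (the helper `P` is illustrative):

```python
from fractions import Fraction

sample_space = set(range(1, 7))   # one throw of a fair die
A, B = {1}, {2}                   # 'throw is a 1', 'throw is a 2'

def P(event):
    """Probability under equally likely outcomes, as an exact fraction."""
    return Fraction(len(event & sample_space), len(sample_space))

print(P(A), P(B), P(A & B), P(A) * P(B))
# P(A ∩ B) = 0, but independence would require it to equal P(A)P(B) = 1/36
```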
37,927 | When two events $A$ and $B$ have no result in common | Independence can be thought of as "If event A occurs it tells you nothing about the probability of event B occurring".
However, if two events share nothing in common then clearly if I know that one event occurred then I know that the other event COULDN'T have occurred. So I am getting information about the other event so there the two events can't be independent.
Note that this was an intuitive explanation and it's possible for B to be empty in which case A and B can be mutually exclusive and still be independent as explained in some of the other answers.
37,928 | When two events $A$ and $B$ have no result in common | It is true that if $P(A)=P(A\mid B)$ then the two events are independent.
Now we know that: $P(A\mid B)=\frac{P(A\cap B)}{P(B)}$. Note the intersection in the numerator. If the two events didn't have "anything in common" then we would have $P(A\cap B)=0$ since the set $A\cap B$ would be empty. However independence works in the space of probabilities. Two events are called independent if $P(A\cap B)=P(A)P(B)$, ie the probability of having both events occurring is equal to the probability of the first one occurring times the probability of the second one. The fact that event $A$ occurred doesn't tell us anything about event $B$.
If on the other hand $P(A\cap B)=0$ this simply says that the two events cannot happen at the same time: they are disjoint.
Example 1.
What is the probability of throwing a die and getting both 1 and 3 at the same time? Since the two events are disjoint, we have that this probability is zero.
Example 2.
What is the probability of throwing a die two times and getting 1 on the first throw and 2 on the second? The two events are not disjoint; the fact that one happened doesn't exclude the other. However they are independent: the fact that one happened tells us nothing about the other. The probability of getting 1 in the first throw is 1/6, the probability of getting 2 in the second throw is 1/6. Then (abusing notation a bit): $P(2\mid 1)= \frac{P(1 \cap 2)}{P(1)} = \frac{ \frac{1}{36}}{\frac{1}{6}}=\frac{1}{6}=P(2)$.
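Example 2 can be checked by enumerating all 36 equally likely outcomes of the two throws (the helper `P` is illustrative):

```python
from fractions import Fraction
from itertools import product

space = list(product(range(1, 7), repeat=2))   # all 36 outcomes of two throws

def P(pred):
    """Probability of the event described by pred, as an exact fraction."""
    return Fraction(sum(1 for w in space if pred(w)), len(space))

p_first_1 = P(lambda w: w[0] == 1)    # 1/6
p_second_2 = P(lambda w: w[1] == 2)   # 1/6
p_both = P(lambda w: w == (1, 2))     # 1/36: equals the product, so independent
print(p_first_1, p_second_2, p_both)
```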
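This dice arithmetic can be checked by brute-force enumeration (a Python check I'm adding, not part of the original answer):

```python
from fractions import Fraction
from itertools import product

# All 36 equally likely outcomes of two die throws.
outcomes = list(product(range(1, 7), repeat=2))
p = Fraction(1, len(outcomes))  # probability of each single outcome

p_first_is_1 = sum(p for a, b in outcomes if a == 1)
p_both = sum(p for a, b in outcomes if a == 1 and b == 2)

# Conditional probability P(second = 2 | first = 1) = P(1 and 2) / P(1).
p_cond = p_both / p_first_is_1

print(p_both)  # 1/36
print(p_cond)  # 1/6, equal to the unconditional P(second = 2)
```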
37,929 | When two events $A$ and $B$ have no result in common | You are confusing dependence with mutual-exclusivity. Two events $A$ and $B$ are independent if and only if $P(A \cap B) = P(A)P(B)$. They are disjoint if and only if $A \cap B = \emptyset$.
Consider a normally-distributed random variable $X$ with mean 0 and variance 1.
Here is a pair of events for each of the four categories:
the events $0 < X < 1$ and $1 < X < 2$ are dependent and disjoint
the events $X = 0$ and $1 < X < 2$ are independent and disjoint
the events $0 < X < 1$ and $0 < X < 2$ are dependent and not disjoint, and
the events $X < 0$ and $\lvert X \rvert < 3$ are independent and not disjoint.
If events $A$ and $B$ are independent and disjoint then
\begin{align}
0 &= P(A \cap B) \\
&= P(A)P(B) \\
&\implies P(A) = 0 \vee P(B) = 0.
\end{align}
37,930 | When two events $A$ and $B$ have no result in common | Let $A$ and $B$ denote two events defined on a sample space $\Omega$.
The formal definition of independent events is as follows.
Definition: $A$ and $B$ are said to be (stochastically)
mutually independent events if
$$P(A\cap B) = P(A)P(B).$$
It is easily shown that any one of the four relations
shown below implies the other three:
$$\begin{align*}
P(A\cap B) &= P(A)P(B)\\
P(A^c\cap B) &= P(A^c)P(B)\\
P(A\cap B^c) &= P(A)P(B^c)\\
P(A^c\cap B^c) &= P(A^c)P(B^c)
\end{align*}$$
and so if $A$ and $B$ are mutually independent events, then so
are $A^c$ and $B$ mutually independent events, as are
$A$ and $B^c$, and $A^c$ and $B^c$.
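These four equivalences can be sanity-checked on a small finite example (the die and the events below are my own illustration, not from the answer): take a fair die with $A=\{\text{even}\}$ and $B=\{1,2\}$, so that $P(A\cap B)=1/6=(1/2)(1/3)=P(A)P(B)$.

```python
from fractions import Fraction

omega = set(range(1, 7))   # one roll of a fair die
A = {2, 4, 6}              # "even"
B = {1, 2}
Ac, Bc = omega - A, omega - B

def prob(event):
    return Fraction(len(event), len(omega))

# A and B are independent; check all four product relations at once.
for E, F in [(A, B), (Ac, B), (A, Bc), (Ac, Bc)]:
    assert prob(E & F) == prob(E) * prob(F)
print("all four product relations hold")
```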
Now, if $P(B) > 0$ so that we can write $P(A \mid B)$ as $P(A\cap B)/P(B)$,
then independence is equivalent to $P(A\mid B)$ equaling $P(A)$, and this is often taken as the
colloquial meaning (or definition) of independence. $A$ and $B$ are
independent events if knowing that $B$ has occurred does not
change our estimate of
the probability of $A$. Put another way, the posterior probability
$P(A\mid B)$ is the same as the prior probability $P(A)$.
The asymmetry in the colloquial definition even leads
people to say $A$ is independent of $B$ (which
can make beginners wonder whether $B$ is independent of $A$
or not), but the
formal definition makes it clear that independence is
a mutual property: one cannot have $A$ independent of $B$
but $B$ dependent on $A$.
Turning to the OP's question, if $0 < P(A), P(B) < 1$,
then mutual independence and mutual exclusion are mutually
exclusive properties. If one property holds, the other cannot.
Of course, the most commonly encountered case is
that neither property holds. Said out loud and clear:
If $A$ and $B$ are mutually independent
events, then they cannot be mutually exclusive events.
If $A$ and $B$ are mutually exclusive
events, then they cannot be mutually independent events.
In the first case, note that mutual independence
implies that $P(A\cap B) = P(A)P(B) > 0$ and so the intersection
of $A$ and $B$ has positive probability. In the second case,
$P(A\cap B) = 0$ cannot equal $P(A)P(B)$ since neither
$P(A)$ nor $P(B)$ is $0$ by assumption and so their product
is a positive number.
As a corollary, note that $A$ and $A$ cannot be a pair of
mutually independent events, nor can $A$ and $A^c$
be mutually independent events.
Much of the discussion in the comments has centered on the
rare cases when $P(A)$ or $P(B)$ happen to equal $0$ or $1$.
First note that
$$P(A \cap \Omega) = P(A) = P(A)P(\Omega)$$
and so $A$ and the certain event $\Omega$ are independent
events for all choices of $A$. Similarly, since
$$P(A \cap \emptyset) = P(\emptyset) = 0 = P(A)P(\emptyset),$$
$A$ and the impossible event $\emptyset$ are independent
events for all choices of $A$. More generally, if $B$
is an event of probability $0$ (not necessarily the impossible
event), then since $A\cap B$ is a subset of $B$ and hence also
has probability $0$, we can generalize to
$P(A \cap B) = 0 = P(A)P(B)$ and so
Any event of probability $0$ is independent of all events
(including itself and its complement). If $B$ is an event
of probability $0$, then $B$ and $B^c$ are independent
events that are mutually exclusive.
If $B$ is an event of probability $0$, then $B^c$ is
an event of probability $1$. Since $B$ and $A$ are
independent events for all choices of $A$, so also
are $B^c$ and $A$ independent events for all choices of $A$.
Thus, we have
Any event of probability $1$ is independent of all events
(including itself and its complement). If $A$ is an event
of probability $1$, then $A$ and $A^c$ are independent
events that are mutually exclusive.
Note that, as @NeilG has pointed out in his answer,
if $A$ and $B$ are independent events that are mutually
exclusive, then at least one of $A$ and $B$ must be
an event of probability $0$.
We also have an anticorollary: $A$ and $A$ are mutually independent
events if and only if $P(A)$ equals either $0$ or $1$.
$A$ and $A^c$ are mutually independent
events if and only if one of $P(A)$ and $P(A^c)$ equals $0$
(and the other equals $1$.)
37,931 | When two events $A$ and $B$ have no result in common | Mutually exclusive events: natural blonde hair and black skin. If I know someone has black skin, I know they will not have naturally blonde hair. Therefore, the characteristics of hair color and skin color are dependent: knowing that someone has black skin tells me something about what their hair will be like, namely that it will not be blonde. Mutually exclusive implies dependence!
Independent events: IQ and shoe size. If you thought someone wore a size 8 shoe, would it change your opinion about their IQ if I told you their shoe size was actually a 10? No. Shoe size and IQ (likely) have no relationship; assume that's true. Then shoe size and IQ are independent of one another.
I hope that helps.
37,932 | If X=Y+Z, Is it ever useful to regress X on Y? | If you know $X = Y + Z$ and you have $Y$ and $Z$ measured, why would you need to run a regression of $X$ on $Y$ and $Z$? It provides no additional information and does not allow you to make "better" predictions about $X$ (since you know $X$ exactly from $Y$ and $Z$). But if you don't have $Z$, don't know the exact relationship between $Y$ and $X$, and want to predict $X$ from $Y$, then you absolutely can (and should!) regress $X$ on $Y$. The true marginal relationship between $X$ and $Y$ depends on the correlation between $Y$ and $Z$, which is not stated in the problem, so you won't automatically know what the marginal relationship between $X$ and $Y$ is from the formula for $X$ alone. Indeed, this is the usual case in which we do regression: we assume a (possibly) deterministic relationship between the outcome and some other variables, but many of those variables are unmeasured, so their influence is captured in the error term. In the absence of measured $Z$, the effect of $Z$ will be captured in the error term, and the marginal effect of $Y$ will be estimated by the model.
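A short simulation (my addition, not part of the answer) makes this concrete: when $X = Y + Z$, the in-sample OLS slope of $X$ on $Y$ is exactly $1 + \widehat{\operatorname{cov}}(Y,Z)/\widehat{\operatorname{var}}(Y)$, so the marginal relationship indeed depends on how $Y$ and $Z$ are related.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000
Y = rng.normal(size=n)
Z = 0.5 * Y + rng.normal(size=n)   # Z covaries with Y
X = Y + Z                          # the deterministic identity

slope = np.polyfit(Y, X, 1)[0]     # OLS slope of X on Y (with intercept)

# In-sample identity: slope = 1 + cov(Y, Z) / var(Y).
expected = 1 + np.cov(Y, Z)[0, 1] / np.var(Y, ddof=1)
print(slope, expected)
```

With $Z$ independent of $Y$ the slope would concentrate near 1; here the positive covariance pushes it above 1.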
37,933 | If X=Y+Z, Is it ever useful to regress X on Y? | Linear regression is a tool that is used to achieve a goal. So any answer will depend on the goal to be achieved. As said already in the answer of @Noah, if you already know $X$, $Y$, and $Z$, I can't see any goal worth achieving that is achieved by this regression. Why would you want imprecise predictions of $X$ if you can have precise ones?
If however you don't know $Z$, linear regression may work well for predicting $X$ from $Y$. There is nothing that formally forbids you from trying that out (in fact if $Z$ is independent of $Y$ and distributed according to standard regression model assumptions, the standard regression model is just fulfilled). But then, depending on the exact nature of the data (particularly the distribution of $Z$ and how it is related to $Y$), it may not work that well, and/or other techniques may work better. This can be explored for example using bootstrap or cross-validation.
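As an illustration of the prediction setting (my own sketch, using scikit-learn cross-validation rather than the bootstrap): with $Y$ and $Z$ independent standard normal, $Y$ carries about half of the variance of $X = Y + Z$, and cross-validated $R^2$ quantifies that.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 500
Y = rng.normal(size=n)
Z = rng.normal(size=n)        # unobserved at prediction time
X = Y + Z

# How well does Y alone predict X?  Cross-validated R^2.
scores = cross_val_score(LinearRegression(), Y.reshape(-1, 1), X,
                         cv=5, scoring="r2")
print(scores.mean())  # roughly 0.5: Y explains about half of Var(X)
```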
37,934 | If X=Y+Z, Is it ever useful to regress X on Y? | Two people step on a scale. The scale outputs only the total weight of the couple.
If you know the weight of the first person ($Y$) can you guess what the scale will output ($X$) without knowing anything about the weight of the second person ($Z$)?
Intuitively, there has to be some connection. If $Y$ is very heavy, it is likely that the total weight $X$ will also be high.
Regressing $X$ on $Y$ here is just trying to put a number to that connection by looking at a bunch of real-life examples of total weight $X$ given first person weight $Y$.
It is not forbidden, as you say. It is simply trying to make the best guess given the information you have.
Of course, if you know the weight of both people, $Y$ and $Z$, and you know how scales work, you don't need observations or regression.
37,935 | Lasso centering and standarization with R | If you use glmnet, the scaling is performed by the package. You don't need to worry about scaling the test set because the "coefficients are always returned on the original scale".
By default:
glmnet(x, y, [...]
standardize = TRUE,
intercept = TRUE,
standardize.response = FALSE [...])
As for the standardization of the response, it should not change the performance of your model after cross validating over $\lambda$ so you can set standardize.response = FALSE
Indeed the LASSO solves
$$ \min_\beta\; \| Y - X\beta \|^2_2 + \lambda \|\beta\|_1 $$
Scaling $Y$ by a factor $\alpha > 0$, the problem becomes
$$ \min_\beta\; \| \alpha Y - X\beta \|^2_2 + \lambda \|\beta\|_1 $$
which is equivalent to
$$ \min_\beta\; \alpha \| Y-X\beta/\alpha \|^2_2 + \lambda \|\beta\|_1 $$
$$ \min_\beta\; \| Y - X\beta/\alpha \|^2_2 + \lambda \|\beta/\alpha\|_1 $$
So it has the same value of $\lambda$
37,936 | Lasso centering and standarization with R | In general, you are right to worry about scaling the responses. If you optimize a function of the kind that LASSO is based on,
$$
\min_{\beta} || Y - X\beta ||_{2}^{2} + \lambda || \beta ||_1,
$$
then scaling the response $Y$ with some constant $\alpha$,
$$
\min_{\beta} || \alpha Y - X\beta ||_{2}^{2} + \lambda || \beta ||_1
$$
leads to
$$
\min_{\beta} \alpha^2 || Y - X\beta / \alpha ||_{2}^{2} + \lambda || \beta ||_1.
$$
Notice the square on the $\alpha$, which is missing in the accepted answer. Dividing by $\alpha^2$ leads to
$$
\min_{\beta} || Y - X\beta / \alpha ||_{2}^{2} + (\lambda / \alpha) || (\beta / \alpha) ||_1,
$$
which describes a solution that will perform differently than a solution of the original problem since effectively the regularization constant changed.
However, if you look at the documentation for glmnet, you will find that the default behavior is to use a log spaced grid for $\lambda$ and choose a range for the grid based on standardized (!) responses and predictors.
So the answer to your first question is, based on the glmnet documentation: If you use default values for all parameters, standardizing the response should not have a big effect on performance since the $\lambda$ grid is chosen using standardized responses anyway. For non-default parameters (e.g., using fewer values in the $\lambda$ grid by setting nlambda to a low value), it might have a larger effect and you have to be careful. Also, this does not carry over to, e.g., sklearn, where no grid is used and changing the response scale might drastically affect performance.
The answer to your second and third questions is that by default, the package standardizes the data before using it and rescales the coefficients before returning them.
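The effect of rescaling the response at a fixed penalty can be demonstrated numerically with scikit-learn's `Lasso` (my own sketch; sklearn's objective is $\frac{1}{2n}\lVert y - Xw\rVert_2^2 + \alpha\lVert w\rVert_1$, which is not glmnet's parametrization): multiplying $y$ by $c$ is equivalent to dividing the effective penalty by $c$, in line with the $\lambda/\alpha$ term in the derivation.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = X @ np.array([1.0, -2.0, 0.0, 0.5, 0.0]) + rng.normal(size=200)

c, alpha = 3.0, 0.1
# Fitting on the scaled response c*y with penalty alpha ...
w_scaled = Lasso(alpha=alpha, tol=1e-8, max_iter=100_000).fit(X, c * y).coef_
# ... matches c times the fit on y with the penalty divided by c.
w_equiv = c * Lasso(alpha=alpha / c, tol=1e-8, max_iter=100_000).fit(X, y).coef_

print(np.max(np.abs(w_scaled - w_equiv)))  # near zero up to solver tolerance
```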
37,937 | Lasso centering and standarization with R | With a lasso regression, standardization is essential. That's because lasso finds the best solution subject to a constraint on the sum of the absolute values of the coefficients. If one didn't scale the predictors, the answer would totally depend on their scaling. For example, using lasso on $x_1, x_2$ as opposed to $x_1, y=\frac{1}{10000} x_2$ would give very different answers. With the second set of variables, the coefficient of $y$ is almost guaranteed to be zero with lasso. Check the glmnet help; I seem to recall that it will automatically scale the data.
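This predictor-scaling effect is easy to reproduce with scikit-learn's `Lasso` (my own sketch; in R, glmnet's `standardize = TRUE` is what protects you from it): shrinking one column by a factor of $10^4$ drives its coefficient to zero at a penalty that leaves the other coefficient alone.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(1)
n = 500
x1 = rng.normal(size=n)
x2 = rng.normal(size=n)
y = x1 + x2 + 0.1 * rng.normal(size=n)

X_raw = np.column_stack([x1, x2])
X_shrunk = np.column_stack([x1, x2 / 10_000])  # same information, new units

fit_raw = Lasso(alpha=0.05).fit(X_raw, y)
fit_shrunk = Lasso(alpha=0.05).fit(X_shrunk, y)

print(fit_raw.coef_)     # both coefficients survive
print(fit_shrunk.coef_)  # the rescaled column is shrunk to exactly zero
```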
37,938 | Probability density function between -1 and 1? | A beta distribution seems to suit your needs, but you'll have to perform a transformation in order to change its $(0,1)$ (finite) support to $(-1,1)$ support.
Let $X$ be distributed with a beta distribution; then the random variable $Y$ given by the transformation
$$Y=(b-a)X+a$$
follows a scaled and shifted beta distribution whose PDF has finite support on $(a,b)$. In your case, $a=-1$ and $b=1$. The PDF of this linear transformation is given by: $$p(Y=y|\alpha,\beta,a,b)=f\left(\frac{y-a}{b-a}\right)\frac{1}{b-a},$$
where $f(x)$ is the PDF of the beta distribution given in the wiki page that I cited, and $\alpha$ and $\beta$ are its parameters. In your case, with $a=-1$ and $b=1$ we get:
$$p(Y=y|\alpha,\beta)=\frac{1}{2}f\left(\frac{y+1}{2}\right).$$
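SciPy exposes exactly this transformation through the `loc`/`scale` arguments (`loc = a`, `scale = b - a`); here is a quick numerical check of the PDF identity (my addition, not part of the original answer):

```python
import numpy as np
from scipy.stats import beta

a_shape, b_shape = 2.0, 5.0
y = np.linspace(-0.99, 0.99, 7)

# Beta rescaled to support (-1, 1): loc = -1, scale = 2.
lhs = beta.pdf(y, a_shape, b_shape, loc=-1, scale=2)
# The hand-derived density: (1/2) * f((y + 1) / 2).
rhs = 0.5 * beta.pdf((y + 1) / 2, a_shape, b_shape)

print(np.allclose(lhs, rhs))
```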
37,939 | Probability density function between -1 and 1? | Here is an attempt to further illustrate how to apply Néstor's suggestion (+1, btw) of using the beta distribution.
The beta distribution has two parameters $\alpha$ and $\beta$. These determine the shape of the distribution - it can look like the distributions in your figure, like a box, like a straight line, and so on. The question, then, is which parameters you should use for your distributions. You want to get the right mean and the right shape of the distributions.
If $X\sim \rm Beta(\alpha,\beta)$ then its mean is $\mu=\frac{\alpha}{\alpha+\beta}$. Thus $\beta=\alpha(\mu^{-1}-1)$.
Recall that if $Y=2X-1$ then $E(Y)=2E(X)-1$. If you want your distribution on $[-1,1]$ to have mean $0.5$, then the beta distributed variable $X$ (which is on $[0,1]$) should have mean $\mu=0.75$, since $0.5=2*0.75-1$.
Example: Set $\alpha=5$ (say). Then $\beta=5\cdot(1/0.75-1)=5/3$ yields $X$ with mean $0.75$.
By trying different combinations of $\alpha$ and $\mu$ you can in this way find distributions with the right mean and the right shape. Here are some examples that resemble your figures:
Finally, from the illustration in your question it seems that what you've marked in red is the mode (i.e. the maximum of the density function) and not the mean of the distribution. The mode of the beta distribution is $\frac{\alpha-1}{\alpha+\beta-2}$. Thus, if the mode is $m$, we have $\beta=(\alpha-1)/m-\alpha+2$. Using this, you can find distributions with the right shape and the right mode with experiments analogous to those above.
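The two solve-for-$\beta$ recipes above can be checked numerically. In this sketch the helper names and the target mode of 0.8 are my own choices, not from the answer:

```python
from scipy.stats import beta

def beta_b_from_mean(a_shape, mu):
    """Second shape parameter so that Beta(a_shape, b) has mean mu."""
    return a_shape * (1 / mu - 1)

def beta_b_from_mode(a_shape, m):
    """Second shape parameter so that Beta(a_shape, b) has mode m (needs a_shape > 1)."""
    return (a_shape - 1) / m - a_shape + 2

# Reproduces the worked example: alpha = 5 and mean 0.75 give beta = 5/3
b = beta_b_from_mean(5, 0.75)
assert abs(b - 5 / 3) < 1e-9
assert abs(beta.mean(5, b) - 0.75) < 1e-9

# An arbitrary target mode of 0.8 as a further check
b2 = beta_b_from_mode(5, 0.8)
assert abs((5 - 1) / (5 + b2 - 2) - 0.8) < 1e-9
```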
What are the primary differences between z-scores and t-scores, and are they both considered standard scores?
What you are reporting is a standardized score. It just isn't the standardized score most statisticians are familiar with. Likewise, the t-score you are talking about isn't what most of the people answering the question think it is.
I only ran into these issues before because I volunteered in a psychometric testing lab while in undergrad. Thanks go to my supervisor at the time for drilling these things into my head. Transformations like this are usually an attempt to solve a "what normal person wants to look at all of those decimal points anyway" sort of problem.
Z-scores are what most people in statistics call "Standard Scores". A score at the mean has a value of 0, and each standard deviation of difference from the mean changes the score by 1.
The "standard score" you are using has a mean of 100, and each standard deviation of difference adjusts the score by 15. This sort of transformation is most familiar for its use on some intelligence tests.
You probably ran into a t-score in your reading. That is yet another specialized term that has no relation (that I am aware of) to a t-test. t-scores represent the mean as 50 and each standard deviation difference as a 10 point change.
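The three conventions described above differ only in the linear rescaling applied to a z-score. A minimal sketch (the function name and example values are mine, not from any testing manual):

```python
def to_standard_score(z, mean=100, sd=15):
    """Rescale a z-score onto a reporting scale (IQ-style 100/15 by default)."""
    return mean + sd * z

z = 1.0                                    # one sd above the mean
assert to_standard_score(z) == 115         # IQ-style scale
assert to_standard_score(z, 50, 10) == 60  # t-score scale
assert to_standard_score(z, 0, 1) == 1.0   # plain z-score
```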
Google found an example conversion sheet here:
http://faculty.pepperdine.edu/shimels/Courses/Files/ConvTable.pdf
A couple of mentions of t-scores here support my assertion regarding them:
http://www.healthpsych.com/bhi/doublenorm.html
http://www.psychometric-success.com/aptitude-tests/percentiles-and-norming.htm
Chapter 5, pp 89, Murphy, K. R., & Davidshofer, C. O. (2001). Psychological testing: principles and applications. Upper Saddle River, NJ: Prentice Hall.
A mention of standardized scores along my interpretation is here:
http://www.gifted.uconn.edu/siegle/research/Normal/Interpret%20Raw%20Scores.html
http://www.nfer.ac.uk/nfer/research/assessment/eleven-plus/standardised-scores.cfm
This is an intro psych book, so it probably isn't particularly official either. Chapter 8, pp 307 in Wade, C., & Tarvis, C. (1996). Psychology. New York: Harper Collins states in regards to IQ testing "the average is set arbitrarily at 100, and tests are constructed so that the standard deviation ... is always 15 or 16, depending on the test".
So, now to directly address your questions:
Yes, z-scores and t-scores are both types of "Standard scores". However, please note that your boss is right in calling the transformation you are doing a "standard score".
I don't know of any standard abbreviation for standardized scores.
As you can see above, I looked for a canonical source, but I was unable to find one. I think the best place to look for a citation people would believe is in the manual of the standardized test you are using.
Good luck.
What are the primary differences between z-scores and t-scores, and are they both considered standard scores?
Your question pertains to terminology used in the reporting of standardised psychometric tests.
Charles Hale has notes on terminology in standardised testing.
My understanding:
z-score: mean = 0; sd = 1
t-score: mean = 50; sd = 10 (example test using t-scores) (interestingly, t-score means something different in the bone density literature)
Typical IQ style scaling: mean = 100; sd = 15
All of the above are "standardised scores" in a general sense.
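Since each convention is just a (mean, sd) pair, converting between them goes through the z-score as an intermediary. A hedged sketch (the function and scale names are mine):

```python
def convert(score, from_scale, to_scale):
    """Map a score between reporting conventions via its z-score.
    Each scale is a (mean, sd) pair."""
    mean_f, sd_f = from_scale
    mean_t, sd_t = to_scale
    z = (score - mean_f) / sd_f
    return mean_t + sd_t * z

Z, T, IQ = (0, 1), (50, 10), (100, 15)
assert convert(65, T, IQ) == 122.5  # t-score 65 is z = 1.5
assert convert(122.5, IQ, Z) == 1.5
```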
I have seen people use the term "standard score" exclusively for z-scores, and also for typical IQ style scaling (e.g., in this conversion table).
In terms of definitive sources of information, there might be something in The Standards for Educational and Psychological Testing from the American Psychological Association.
What are the primary differences between z-scores and t-scores, and are they both considered standard scores?
Most basic texts on statistics will define these as $z= \frac{\bar{x}-\mu}{ \sigma/\sqrt{n} }$ and $t=\frac{\bar{x}-\mu}{s/\sqrt{n}}$. The difference is that $z$ uses $\sigma$, the known population standard deviation, while $t$ uses $s$, the sample standard deviation used as an estimate of the population $\sigma$. There are sometimes variations on $z$ for an individual observation. Both are standardized scores, though $t$ is pretty much only used in testing or confidence intervals while $z$ with $n=1$ is used to compare between different populations.
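A small illustration of the two statistics on toy data (the sample and seed are arbitrary choices of mine; `scipy.stats.ttest_1samp` is used only to cross-check the hand-computed $t$):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
sigma = 2.0                              # pretend-known population sd
x = rng.normal(loc=5.0, scale=sigma, size=30)
mu0 = 5.0                                # hypothesized mean

z_stat = (x.mean() - mu0) / (sigma / np.sqrt(len(x)))          # uses known sigma
t_stat = (x.mean() - mu0) / (x.std(ddof=1) / np.sqrt(len(x)))  # uses estimated s

# Cross-check the hand-computed t against SciPy's one-sample t-test
assert np.isclose(t_stat, stats.ttest_1samp(x, mu0).statistic)
```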
What are the primary differences between z-scores and t-scores, and are they both considered standard scores?
The Student's t test is used when you have a small sample and have to approximate the standard deviation (SD, $\sigma$). If you look at the distribution tables for the z-score and t-score you can see that they quickly approach similar values, and that with more than 50 observations the difference is so small that it really doesn't matter which one you use.
The term standard score indicates how many standard deviations away from the expected mean (the null hypothesis) your observations are, and through the z-score you can then deduce the probability of that happening by chance, the p-value.
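The convergence of the two tables can be checked directly from the critical values (a sketch using SciPy; the 97.5% quantile corresponds to a two-sided 5% test, and the degrees-of-freedom values are my own picks):

```python
from scipy.stats import norm, t

z_crit = norm.ppf(0.975)          # two-sided 5% critical value, about 1.96
gaps = {df: t.ppf(0.975, df) - z_crit for df in (5, 50, 500)}

# The gap shrinks monotonically with the degrees of freedom; by df = 50
# it is already only a few hundredths.
assert gaps[5] > gaps[50] > gaps[500] > 0
```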
What is likelihood actually?
The likelihood function parametrized by a parameter $\theta$ in statistics is defined as
$$
\mathcal{L}(\theta \mid x) = f_{\theta}(x)
$$
where $f_{\theta}$ is the probability density or mass function with parameter $\theta$ and $x$ is the data.
If for some data $x$ you evaluate the function for the parameter $\theta$ we call the result the “likelihood” of $\theta$. There's no other “likelihood”, because this is how we define it.
Using a code example, Gaussian likelihood could be implemented in Python as below.
import numpy as np
from scipy.stats import norm

# X holds the observed data; it is fixed in the enclosing scope, not passed in
def likelihood(loc, scale):
    return np.prod(norm.pdf(X, loc=loc, scale=scale))
where norm.pdf is the Gaussian probability density function and np.prod calculates the product of the probability density values returned for each value in the array X. Notice that X is not an argument of the function, it is fixed, and the only arguments of the likelihood function are the parameters (here loc and scale). What the function returns, is the likelihood for the parameters passed as arguments. If you maximize this function, the result would be a maximum likelihood estimate for the parameters.
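In practice one usually maximizes the logarithm of the likelihood instead, because the product of many small densities underflows. A hedged sketch of that maximization step (the toy data `X`, the seed, and the optimizer settings here are my own choices for illustration):

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

# Toy observed data, generated only for this illustration
X = np.random.default_rng(1).normal(loc=3.0, scale=1.5, size=500)

def neg_log_likelihood(params):
    loc, scale = params
    return -np.sum(norm.logpdf(X, loc=loc, scale=scale))

res = minimize(neg_log_likelihood, x0=[0.0, 1.0],
               bounds=[(None, None), (1e-6, None)])
loc_hat, scale_hat = res.x

# For a Gaussian the MLE has a closed form: sample mean and (ddof=0) sample sd
assert np.isclose(loc_hat, X.mean(), atol=1e-2)
assert np.isclose(scale_hat, X.std(ddof=0), atol=1e-2)
```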
Could it have been better named?
Maybe, but it wasn't. But the same applies to all the other names in mathematics, or names in general. For example, "isomorphism" or "monoid" may also not be great names, but this is how we call them.
What is likelihood actually?
There have been numerous responses including some to your very posts earlier and the present one too.
It should be reiterated that even though $\mathcal L(\theta\mid \mathbf x)$ or $\ell_\mathbf x(\theta)$ (the latter notation emphasizing what the argument is) has the same functional form as the corresponding density function of the distribution, in the likelihood what varies is the value of $\theta$ over the parameter space, given the observed sample value $\mathbf x.$ As has been noted earlier too, $\ell_\mathbf x(\theta),$ as a function of $\theta,$ doesn't have to be a legitimate density function.
It returns likelihood as codified in the Likelihood Principle, which basically says that two likelihood functions carry the same information about $\theta$ if they are proportional to one another. Stated in more formal terms: if $E:=(\mathbf X, \theta,\{f_\theta(\mathbf x) \})$ is the experiment, then any conclusion about $\theta$ (measured by the evidence function $\textrm{EV}(E,\mathbf x)$) should depend on $E,~\mathbf x$ only via $\ell_\mathbf x(\theta).$ So, if $\ell_\mathbf x(\theta)=C(\mathbf x, \mathbf y)\,\ell_\mathbf y(\theta), ~\forall\theta\in\Theta$ (where $C(\mathbf x, \mathbf y)$ is independent of $\theta$) for two sample values $\mathbf x, \mathbf y,$ then the inference on $\theta$ based on either sample observation is equivalent.
Thus likelihood functions enable us to assess the "plausibility" of $\theta:$ if $\ell_\mathbf x(\theta_2) =c\ell_\mathbf x(\theta_1),$ then it is likely that $\theta_2$ is $c ~(c>0) $ (say) times as plausible as $\theta_1.$ By the likelihood principle, for the sample value $\mathbf y, ~\ell_\mathbf y(\theta_2) =c\ell_\mathbf y(\theta_1)$ and likely $\theta_2$ is $c$ times as plausible as $\theta_1$ irrespective of whether $\mathbf x$ or $\mathbf y$ is the realized observation of the sample.
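A tiny numerical illustration of this proportionality property (the Bernoulli-type likelihood, the grid over $\theta$, and the constant 42 are arbitrary choices of mine, not from the answer):

```python
import numpy as np

theta = np.linspace(0.01, 0.99, 99)

# Bernoulli-type likelihood for a toy sample with 7 successes in 10 trials
ell = theta**7 * (1 - theta)**3

# Any rescaling C * ell with C free of theta is an equivalent likelihood
ell_scaled = 42.0 * ell

# Likelihood ratios, and hence the maximizer, are unchanged by the scaling
assert np.isclose(ell[80] / ell[40], ell_scaled[80] / ell_scaled[40])
assert theta[np.argmax(ell)] == theta[np.argmax(ell_scaled)]
```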
Since confusion about likelihoods and priors still lingers, let me quote verbatim from $\rm [II]$ (to articulate the relationship between Bayes' theorem and the likelihood function; emphasis mine):
[...] given the data $\mathbf y, ~p(\mathbf y\mid\boldsymbol\theta) $ in $$p(\boldsymbol\theta\mid\mathbf y) =cp(\mathbf y\mid\boldsymbol\theta) p(\boldsymbol\theta)$$ may be regarded as a function not of $\bf y$ but of $\boldsymbol\theta.$ When so regarded, following Fisher ($1922$), it is called the likelihood function of $\boldsymbol\theta$ for given $\mathbf y$ and can be written $l(\boldsymbol\theta\mid\mathbf y). $ We can thus write Bayes' formula as $$ p(\boldsymbol\theta\mid\mathbf y) =l(\boldsymbol\theta\mid\mathbf y)p(\boldsymbol\theta).$$ In other words, Bayes' theorem tells us that the probability distribution for $\boldsymbol \theta$ posterior to the data $\bf y$ is proportional to the product of the distribution for $\boldsymbol\theta$ prior to the data and the likelihood for $\boldsymbol\theta$ given $\mathbf y. $
Reference:
$\rm [I]$ Statistical Inference, George Casella, Roger L. Berger, Wadsworth, $2002, $ sec. $6.3.1, $ pp. $290-291, ~293-294.$
$\rm [II]$ Bayesian Inference in Statistical Analysis, George E. P. Box, George C. Tiao, Wiley Classics, $1992, $ sec. $1.2.1, $ pp. $10-11.$
What is likelihood actually?
As stated by many others: the likelihood function $\mathcal L_y$ is the probability density function $f_\theta$ of the observed data $y$, but viewed as a function of the (unknown) parameter $\theta$, i.e., $\mathcal L_y(\theta) = f_\theta(y)$.
To provide some intuition, let's consider discrete data1 and continuous data separately:
For discrete data we have $\mathcal L_y(\theta) = \mathbb P_\theta(Y = y)$, i.e., the likelihood function is the probability of the observed data viewed as a function of the parameter $\theta$.
For continuous data let's assume that in practice we can only measure, and thus observe, data with limited accuracy. Then, observing $Y = y$ (say, for the sake of simplicity, for real-valued $Y$) can be understood as indicating that $Y$ took a value in a small interval $[y - \delta, y + \delta]$. For the probability of the observed datum $y$ we then have
$$
\mathbb P_\theta(Y \in [y - \delta, y + \delta]) = \mathbb P_\theta(y - \delta \leq Y \leq y + \delta) = \int_{y - \delta}^{y + \delta} f_\theta(u) \,\mathrm d u.
$$
Now, the approximation
$$\int_{y - \delta}^{y + \delta} f_\theta(u) \,\mathrm d u \approx f_\theta(y) \cdot \left[\left(y + \delta\right) -\left(y - \delta\right)\right]
= 2\delta \cdot f_\theta(y)
$$
suggests that the probability of the observed datum $y$ is approximately proportional to $f_\theta(y)$.
In this sense $\mathcal L_y(\theta) = f_\theta(y) \overset{\text{approx.}}\propto \mathbb P_\theta(Y \in [y - \delta, y + \delta])$ still indicates how "likely" a parameter value $\theta$ is for the observed datum $y$.
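This approximation is easy to verify numerically. In the sketch below, the standard normal model, the point $y=1$, and the half-width $\delta=0.01$ are arbitrary choices of mine:

```python
from scipy.stats import norm

y, delta = 1.0, 0.01   # arbitrary observation and measurement half-width

exact = norm.cdf(y + delta) - norm.cdf(y - delta)  # P(y - d <= Y <= y + d)
approx = 2 * delta * norm.pdf(y)                   # 2 * delta * f(y)

assert abs(exact - approx) / exact < 1e-4
```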
1 here the probability mass function referred to in some of the other answers can be seen as probability density function w.r.t. the counting measure.
Reference
Held, L., & Sabanés Bové, D. (2020). Likelihood and Bayesian inference: With applications in biology and medicine (Second edition). Springer.
What is likelihood actually?
Likelihood is a slippery concept. The likelihood function, $L(t | w)$, expresses how probable the data $t_n$ are in relation to the model function $y(x,w)$.
The uncertainty in the empirical measurements enters the modeling through the misfits $ε_n$; one misfit for each data point. We introduce a misfit probability distribution, usually a Gaussian.
The values of the misfits have no specific meaning. But the introduction of the misfit probability distribution has changed our state of knowledge in a rather fundamental way. As data scientists, we prefer models with small misfits and look suspiciously at large outliers. The misfits $ε_n$ are now subjected to an overarching probability distribution $p(ε_n)$, which “connects” the previously unrelated individual misfit values by their probabilities. Conceptually, this metamorphosis is a big step.
The likelihood value is the product of the $p(ε_n)$.
The likelihood function is the generalization of the likelihood value.
The likelihood function is the product of the misfit probability densities but with the dependency on the coefficient $w$ of the model function $y(x,w)$ taken into account.
Technically, the likelihood function is not a normalized probability distribution over $w$. It is a factor in Bayes' theorem, and does not necessarily integrate to unity.
$$\int L(t | w) dw \ne 1.$$
However, it is a normalized probability distribution when integrated over
the data values $t$
$$\int L(t | w) dt = 1.$$
Which makes it confusing.
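A model simpler than the Gaussian misfit model makes this asymmetry easy to compute exactly. The Bernoulli sketch below is my own illustration, not from the book excerpted here: summing the likelihood over the data gives 1, while integrating it over the parameter does not.

```python
from scipy.integrate import quad

# Bernoulli model: L(t | w) = w**t * (1 - w)**(1 - t), with data t in {0, 1}
def L(t, w):
    return w ** t * (1 - w) ** (1 - t)

w = 0.3
# Summing over the data t gives 1 (it is a probability distribution in t) ...
assert abs(L(0, w) + L(1, w) - 1.0) < 1e-12
# ... but integrating over the parameter w for fixed t = 1 gives 1/2, not 1
area, _ = quad(lambda p: L(1, p), 0, 1)
assert abs(area - 0.5) < 1e-8
```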
The above texts are excerpts from my new tutorial book "What is your model?, a Bayesian tutorial". It is a short self-study book on Bayesian data analysis. One can read 80% for free on Amazon under the Look Inside feature, thereby avoiding bad buys.
What is likelihood actually?

I am not exactly sure I fully understand the question, but I suspect it might come down to understanding what the likelihood function is measuring exactly.
Let $X$ be a (absolutely) continuous random variable and let $f_X(\cdot)$ denote its PDF, also known as the likelihood function. The value of $f_X(a)$ is not equal to $P(X = a)$, indeed since $X$ is continuous it means that $P(X=a)=0$ for every real number $a$.
So then what is the interpretation of $f_X(a)$? It denotes the probability of landing inside an "infinitesimal neighborhood of $a$". To make this more precise, let $\varepsilon > 0$, then we can ask $P(|X-a|\leq \varepsilon)$. We thicken the point $a$ by a width of $2\varepsilon$. Then $f_X(a)$ is the limit of $\frac{P(|X-a|\leq \varepsilon)}{2\varepsilon}$ as we shrink $\varepsilon$.
For instance, if $f_X(0) = 2$, this means that if we drew a tiny neighborhood of thickness $\ell$, then the probability of landing inside that neighborhood is approximately equal to $2\ell$. This explains why likelihood can exceed the value of $1$ whereas the probability does not.
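The limiting ratio above can be checked numerically. A quick sketch of mine, taking $X$ standard normal and the point $a = 1$:

```python
from scipy.stats import norm

a, eps = 1.0, 1e-4
# P(|X - a| <= eps) / (2 * eps) for a standard normal X ...
ratio = (norm.cdf(a + eps) - norm.cdf(a - eps)) / (2 * eps)
# ... already agrees with the density f_X(a) for this small eps.
print(ratio, norm.pdf(a))  # both ≈ 0.242
```

Shrinking `eps` further drives the ratio to the density value exactly, which is the limit described in the text.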
What is likelihood actually?

After studying Tim's answer and realising that code can give me the best short sentences, I realised that I could understand best via code.
The following code outputs the likelihood of a particular mean and standard deviation combination, for a normal distribution and some generated data.
import numpy as np
from scipy.stats import norm

RNG = np.random.default_rng(seed=0)

def likelihood(pMean, pStdDev):
    # construct the probability density function for the model this particular
    # likelihood function will use, using the supplied parameters
    gaussianPDF = norm.pdf(X, loc=pMean, scale=pStdDev)
    # the likelihood is the product of the PDF values over the data points
    productOfPDFs = np.prod(gaussianPDF)
    return productOfPDFs

X = RNG.choice(20, 30)  # 30 draws from {0, ..., 19}
mean = np.mean(X)
stdDev = np.std(X)
lh = likelihood(mean, stdDev)
print('The likelihood of mean', mean, 'and stdDev', stdDev, 'for the data is', lh)
When I ran it, it printed a very small likelihood value, as expected for a product of thirty densities that are each well below one.
This is for a normal distribution.
I see that the likelihood is the product of probability density values, one for each data point.
The particular probability density function is based on a distribution model and some data distributed according to the model.
To get the likelihood of some parameters we evaluate the likelihood function for the particular distribution and data, using the parameters we are interested in.
Thus,
The likelihood of the parameters of a model is given by the joint probability density of the data, as modelled using those parameters.
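One practical footnote of mine, under the same normal model as above: a product of thirty densities is a very small number, so in practice one sums log-densities instead, and exponentiating recovers the same likelihood whenever it does not underflow.

```python
import numpy as np
from scipy.stats import norm

RNG = np.random.default_rng(seed=0)
X = RNG.choice(20, 30)
mean, stdDev = np.mean(X), np.std(X)

# Sum of log-densities: the numerically stable form of the product of PDFs.
logLh = np.sum(norm.logpdf(X, loc=mean, scale=stdDev))
lh = np.prod(norm.pdf(X, loc=mean, scale=stdDev))
print(np.isclose(np.exp(logLh), lh))  # True
```

For larger samples the product form underflows to zero while the log-likelihood stays perfectly usable, which is why optimizers work with the log.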
Is there a GLM bible?

> Is there consensus in the field of statistics that one book is the absolute best source and completely covering every aspect of GLM - detailing everything from estimation to inference?
McCullagh, P., & Nelder, J. A. (1989). Generalized Linear Models. CRC Press.
Is there a GLM bible?

It's hard to beat
Generalized Linear Models.
P. McCullagh, J. Nelder.
CRC Press.
2nd edition, 1989
It is comprehensive.
Is there a GLM bible?

I don't think there is a single book that will be exactly what you want. From your description, I think the best fit would be:
Dobson, A. J., & Barnett, A. (2008). An Introduction to Generalized Linear Models. Chapman and Hall.
It is a classic. It does cover the math, but is also more introductory than other books that do so.
Is there a GLM bible?

The closest thing I've found to a GLM Bible is Applied Linear Statistical Models by Kutner, Nachtsheim, Neter, and Li. It's over 1400 pages and covers linear regression and GLMs. Virtually anything involving GLMs can be found in that book.
Is there a GLM bible?

A good book is the one by Fahrmeir et al., "Multivariate Statistical Modelling Based on Generalized Linear Models" (second edition): https://www.amazon.com/Multivariate-Statistical-Modelling-Generalized-Statistics/dp/0387951873/ref=sr_1_1?s=books&ie=UTF8&qid=1506715879&sr=1-1. It is maybe not for a first treatment, but for various extensions of the basic model and coverage of computational algorithms. As the title says: multivariate extensions, semiparametric approaches (splines), extensions to time series, and more.
Is there a GLM bible?

Introductory books:
An Introduction to Generalized Linear Models, by George Dunteman and Moon-Ho Ho (2006). Only 72 pages.
Generalized Linear Models: A Unified Approach, by Jeff Gill (2001). This is also short (101 pages).
Then you have more textbook-like, longer books like the one you mention (444 pages), or the one in the other answer (511 pages).
Is there a GLM bible?

The Nelder book already mentioned is a good one.
Just for more consideration, I would recommend Elements of Statistical Learning, Second Edition, by Trevor Hastie, Robert Tibshirani, and Jerome Friedman. I like ESL because it covers such a breadth of statistical and machine learning topics. It shows how GLMs fit in with other techniques (and it's free).
And as seen in this question, I'd recommend the Simon Wood text Generalised Additive Models: an introduction with R. I really believe the Wood text is worth considering because, while it says it covers GAMs, it really covers LMs, GLMs, and GAMs in detail and introduces some mixed modeling techniques as well. Wood's approach is to introduce each topic with a theoretical background, but then the text is very practical and has examples already in an R package that can be downloaded to accompany the book.
Is the likelihood a true function?

> Many books and many posts on this site define the likelihood as a function of model parameters.

If you specify a value for each of the parameters*, you will have at most one value for the likelihood.

*(along with everything else you need to have specified, of course)

> However, does the output associated with every possible model parameter have to be unique? For example, it seems that for some two configurations of the model parameter, the observed data can be equally likely.

You're confused -- take some function $f$ -- it's fine for $f(x_1)$ and $f(x_2)$ to be equal. $f(x)=(x-3)^2$ is a function, even though $f(2)=f(4)$. That's two different arguments having the same function value, not the function having two different values for a given argument.

> So, my question is whether we are playing fast and loose with the word "function" when talking about the likelihood

Nope.

> or really the likelihood by definition has to be a function and each input for the model parameter must yield a unique P(x|θ)?

Yes. But you seem to have gotten a little confused about what that means.
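A tiny numerical illustration of this point (my own sketch, not part of the original answer, using a binomial model): for one success in two trials, the likelihood takes the same value at $\theta=0.3$ and $\theta=0.7$, yet it is still a perfectly good function of $\theta$.

```python
from scipy.stats import binom

# Binomial likelihood of observing 1 success in 2 trials, as a function of theta.
L = lambda theta: binom.pmf(1, 2, theta)
print(L(0.3), L(0.7))  # two distinct arguments, one shared value: 0.42 each
```

Equal outputs at different inputs never threaten function-hood; only one input mapping to two outputs would.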
Is the likelihood a true function?

A real-valued function $f$ associates to a vector or real entry $\theta\in\Theta$ a real number $f(\theta)$, that is,
\begin{align*}f:\ \Theta &\longrightarrow \mathbb{R}\\ \theta &\longmapsto f(\theta)\end{align*}
Once the data $(x_1,\ldots,x_n)$ is observed and thus fixed, the likelihood associates with a given value of the parameter $\theta$ the real number $$\prod_{i=1}^n p(x_i|\theta)$$ where $p(x|\theta)$ is the density of the random variable $X_i$. (I assume i.i.d.-ness in this answer to keep notations at a minimum complexity.) It is therefore a well-defined function in the mathematical sense.
Is the likelihood a true function?

This is closely related to the concept of identifiability in mathematical statistics, which I think relates to your question. Identifiability deals with the possibility that a poorly-specified model may lack a one-to-one relationship between parameter sets and probability distributions over the data, and this causes problems with regard to inference.
For instance, take the "over-parameterized" ANOVA model,
$$
Y_{ij} = \mu + \alpha_i + \epsilon_{ij} ,
$$
where $1 \leq i \leq k$, $1 \leq j \leq n$, $\epsilon_{ij} \sim$ normal$(0, \sigma^2)$, and no restrictions are placed on $\{ \alpha_i \}_{i=1}^{k}$. Now suppose we were told by an oracle the exact distribution of $Y_{ij}$ within each group, so that we know both its mean and variance for every $i$. (This is in fact the maximum we could ever hope to learn from the data.) Can we recover the model parameters? We cannot, because there's an infinite number of ways we could specify $\mu, \alpha_1, \ldots , \alpha_k$ so that $\text{E}(Y_{ij}) = \mu + \alpha_i$ for each $i$. This would show up in the likelihood function as well, where different parameter sets would give exactly the same likelihood for all possible configurations of the data. The model is not identifiable, and we can't obtain even consistent estimates for any of the mean parameters. For this reason one usually imposes the identifiability constraint $\sum_{i=1}^{k} \alpha_i = 0$.
So while it's important that the parameters of the model specify the distributions involved, it's also important that we be able to go in the other direction and infer parameters from distributions, else we could never uncover the "true" model.
What is it called to use random error as evidence?

$p$-value hacking
> I’ve learned that the headline-grabbing cases of misconduct and fraud are mere distractions. The state of our science is strong, but it’s plagued by a universal problem: Science is hard — really f**ing hard. If we’re going to rely on science as a means for reaching the truth — and it’s still the best tool we have — it’s important that we understand and respect just how difficult it is to get a rigorous result. I could pontificate about all the reasons why science is arduous, but instead I’m going to let you experience one of them for yourself. Welcome to the wild world of $p$-hacking.
From an introductory paragraph at "Science isn't broken," a feature at Fivethirtyeight.com (Christie Aschwanden, Aug. 19, 2015).
The article describes how you can achieve publishable results (and reject a null hypothesis) even though the results are not reproducible.
The $p$-value is that "due to random chance" footnote that you are looking for. By hacking it, you can get your results published.
What is it called to use random error as evidence?

Cherry picking, suppressing evidence, or the fallacy of incomplete evidence ... you can check Wikipedia:
> Cherry picking, suppressing evidence, or the fallacy of incomplete evidence is the act of pointing to individual cases or data that seem to confirm a particular position while ignoring a significant portion of related and similar cases or data that may contradict that position.
What is it called to use random error as evidence?

Although the technical details may deter folk from answering, they are secondary to your main question about English terminology.
Nevertheless, let's get them out of the way. You have 95% confidence in the results; I take this to mean you knew that 5 in a hundred fail. The chance of randomly picking 1 success from a batch of 100 is 95/100, leaving 99 to choose from. A second success of 1 from 99 has chance 94/99, and so forth. It works out that the chance of randomly picking 5 successes from 100 is about 77%.
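That 77% can be reproduced with a one-line combinatorial check (my own sketch of the calculation above):

```python
from math import comb

# Probability that 5 reports drawn at random from 100 trials
# (95 successes, 5 failures) happen to all be successes.
p = comb(95, 5) / comb(100, 5)
print(round(p, 4))  # 0.7696
```

This is the same telescoping product 95/100 · 94/99 · ... · 91/96, written as a ratio of binomial coefficients.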
But you have presented 5 successes to the world as evidence that the drug works with 100% confidence rather than the 77% confidence that you should have inferred. This is at least statistical misrepresentation.
Even if you chose your five examples at random, you have chosen to present a circumstance that happens only 77% of the time in such trials as one that always happens. This is unwarranted inference.
Or you may have chosen your five examples knowing them to be successes. This is selective bias and is statistical falsification.
Nevertheless, let's get them out of the way. You have 95% confidence i | What is it called to use random error as evidence?
Although the technical details may deter folk from answering, they are secondary to your main question about English terminology.
Nevertheless, let's get them out of the way. You have 95% confidence in the results; I take this to mean you knew that 5 in a hundred fail. The chance of randomly picking 1 success from a batch of 100 is 95/100, leaving 99 to choose from. A second success of 1 from 99 has chance 94/99, and so forth. It works out that the chance of randomly picking 5 successes from 100 is about 77%.
But you have presented 5 successes to the world as evidence that the drug works with 100% confidence rather the 77% confidence that you should have inferred. This is at least statistical misrepresentation.
Even if you chose your five examples at random, you have chosen to present a circumstance that happens only 77% of the time in such trials as one that always happens. This is unwarranted inference.
Or you may have chosen your five examples knowing them to be successes. This is selective bias and is statistical falsification. | What is it called to use random error as evidence?
Although the technical details may deter folk from answering, they are secondary to your main question about English terminology.
Nevertheless, let's get them out of the way. You have 95% confidence i |
What is it called to use random error as evidence?

If the trials are being tested with a variety of statistical methods, and only the "successful" ones publicized, then the primary term for this would be data dredging. From Wikipedia:
> Data dredging (also data fishing, data snooping, data butchery, and p-hacking) is the misuse of data analysis to find patterns in data that can be presented as statistically significant, thus dramatically increasing and understating the risk of false positives. This is done by performing many statistical tests on the data and only reporting those that come back with significant results.
On the other hand, if only a single statistical test is being applied, and the failed trials literally discarded, then "cherry picking" (from another answer) would be the better term.
37,964 | What is it called to use random error as evidence? | There's the term publication bias, but that's more about studies done by different researchers where only the researchers who get "good" results publish them. A similar term is "file drawer effect".
The term p-hacking doesn't quite apply, as p-hacking refers to cherry-picking a metric for a particular data set, not cherry-picking the data set. For instance, if you're testing for ESP, and you have someone guess playing cards, you can look at how many times they get the exact card, how often they get the right card value, how often they get the right suit, how often they get the right color, etc. If you keep looking at different ways of measuring "success" until you get one with p<0.05, that's p-hacking.
37,965 | In training a triplet network, I first have a solid drop in loss, but eventually the loss slowly but consistently increases. What could cause this? | Triplet models are notoriously tricky to train. Before starting a triplet loss project, I strongly recommend reading "FaceNet: A Unified Embedding for Face Recognition and Clustering" by Florian Schroff, Dmitry Kalenichenko, James Philbin because it outlines some of the key problems that arise when using triplet losses, as well as suggested remediations. In my experience, their tips and tricks provide enormous improvements to model training, both in terms of performance against a test set as well as wall-time consumed to train the model. In summary, the authors make several suggestions, but we need to motivate them.
Let's start by defining the problem. The goal of triplet loss is to find an embedding such that
$$
\left\|f(x^a_i) - f(x^p_i) \right\|_2^2+\alpha <
\left\|f(x_i^a)-f(x_i^n)\right\|_2^2
\quad \forall \left(f(x_i^a),f(x_i^p),f(x_i^n)\right)\in\mathcal{T}
\tag{*}$$
where $\mathcal{T}$ is the set of all possible triplets. A triplet is composed of an anchor point, a positive point (same class as the anchor), and a negative point (distinct class from the anchor).
Clearly, iterating over all possible triplets becomes enormously expensive when the data set is even moderately sized. So one way to economize computation and potentially speed up the model is to deliberately choose which triplets to use in updating the model, instead of enumerating all of them, or only choosing triplets at random.
The loss is zero when the inequality $(*)$ holds, and becomes larger the more that this inequality is violated, giving us the loss function
$$\begin{aligned}
L &= \sum_i \max\left\{0,
\left\|f(x^a_i) - f(x^p_i) \right\|_2^2 -
\left\|f(x_i^a)-f(x_i^n)\right\|_2^2
+\alpha\right\} \\
&= \sum_i \text{ReLU}\left(\left\|f(x^a_i) - f(x^p_i) \right\|_2^2 -
\left\|f(x_i^a)-f(x_i^n)\right\|_2^2
+\alpha\right).
\end{aligned}
$$
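As a minimal numeric sketch of this loss (my own illustration, not the FaceNet code; the arrays stand in for embeddings produced by some model $f$):

```python
import numpy as np

def triplet_loss(anchor, positive, negative, alpha=0.2):
    """Hinge-style triplet loss, summed over the batch.

    anchor, positive, negative: (batch, dim) arrays of embeddings.
    alpha: margin hyperparameter.
    """
    d_ap = np.sum((anchor - positive) ** 2, axis=1)  # squared anchor-positive distances
    d_an = np.sum((anchor - negative) ** 2, axis=1)  # squared anchor-negative distances
    return np.sum(np.maximum(0.0, d_ap - d_an + alpha))  # ReLU of the margin violation

# A triplet that already satisfies the inequality contributes zero loss:
a = np.array([[0.0, 0.0]])
p = np.array([[0.1, 0.0]])   # close to the anchor
n = np.array([[2.0, 0.0]])   # far from the anchor
print(triplet_loss(a, p, n))  # 0.0, since d_ap + alpha = 0.21 < d_an = 4.0
```

Swapping the positive and negative violates the inequality and yields a positive loss.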
My hypothesis of your observed behavior.
My understanding is that you're composing triplets by selecting points at random when constructing a triplet. After even a little training, it's usually the case that model arranges the classes well enough that the loss for a randomly-selected triplet is typically small or even zero (but not for all triplets). Counter-intuitively, this isn't helpful, because if the training losses are zero, there's no information available to adjust the weights. Instead, we want to focus on the triplets with the most information; these are the so-called hard triplets. This explains why the loss initially decreases, as well as explaining why you observe large swings in the loss value: most triplets become easy after a little training, but some triplets are hard.
Additionally, I believe you're seeing large swings in the loss value because the minibatch size is small.
This brings us to the first tip from the paper.
Focus on the hardest triplets.
Instead of composing a triplet at random, use online hard-negative mining to choose the triplets with the highest loss.
We want to search for these hard triplets online because which triplets are hard depends on their embeddings, which depend on the model parameters. In other words, the set of triplets labeled "hard" will probably change as the model trains.
So, within a batch, compare all of the distances and construct the triplets where the anchor-negative distance $ \left\|f(x_i^a)-f(x_i^n)\right\|_2^2 $ is the smallest. This is online mining because you're computing the batch and then picking which triplets to compare. It's hard negative mining because you're choosing the smallest anchor-negative distance. (By contrast, batch-hard mining chooses the hardest negative and the hardest positive. The hardest positive has the largest $\left\|f(x^a_i) - f(x^p_i) \right\|_2^2$. Batch-hard mining is an even harder task because both the positives and negatives are hardest.)
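A sketch of that selection step (my own illustration, not from the paper): compute a pairwise squared-distance matrix over one batch, mask out same-class pairs, and take the closest remaining point per anchor.

```python
import numpy as np

def hardest_negatives(embeddings, labels):
    """For each row, return the index of the closest point with a different label.

    embeddings: (batch, dim) array; labels: (batch,) array of class ids.
    """
    # Pairwise squared Euclidean distances via ||x||^2 + ||y||^2 - 2 x.y
    sq = np.sum(embeddings ** 2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * embeddings @ embeddings.T
    same = labels[:, None] == labels[None, :]
    d2[same] = np.inf          # exclude self and same-class points
    return np.argmin(d2, axis=1)

emb = np.array([[0.0, 0.0], [1.0, 0.0], [0.2, 0.0], [5.0, 0.0]])
lab = np.array([0, 0, 1, 1])
print(hardest_negatives(emb, lab))  # [2 2 0 1]: e.g. anchor 0's closest negative is point 2
```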
By construction, we know that the loss for all non-hard triplets must be smaller because hard triplets are characterized by having the largest losses. This means that the numerical values of hard mining will tend to be larger compared to other methods of choosing triplets.
This brings us to the second suggestion.
Use large batch sizes.
Because online hard negative mining looks for the largest losses amongst all possible triplets in a batch, using a large batch is helpful because the value of those maxima is larger in expectation. This is an obvious result of order statistics: appending more draws to a sample will produce a maximum that's at least as large. The FaceNet paper uses batch sizes of 1000. Increasing the batch size increases the difficulty of the task.
As additional justification for large batch sizes consider that we would like to make all triplet comparisons to find the hardest triplets at each step of computing the loss. However, because $|\mathcal{T}|$ is large, this is typically infeasible. So instead, we will look for the hard samples inside each mini-batch, for some large mini-batch size. This will tend to result in easier triplets compared to the hardest triplets within the entire data set, but is a necessary compromise to make feasible training models on large datasets.
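A quick simulation of the order-statistics point (illustration only; i.i.d. uniform draws stand in for per-triplet losses):

```python
import numpy as np

# The hardest example found in a batch -- here, the max of i.i.d. draws -- is
# larger in expectation when the batch is larger.
rng = np.random.default_rng(0)
losses = rng.uniform(size=(2000, 1000))       # 2000 simulated "batches"
small = losses[:, :10].max(axis=1).mean()     # hardest of 10 per batch
large = losses.max(axis=1).mean()             # hardest of 1000 per batch
print(small, large)  # roughly 0.909 vs 0.999 for Uniform(0, 1)
```

For Uniform(0, 1) the expected maximum of $n$ draws is $n/(n+1)$, matching the simulation.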
This brings us to the third suggestion.
Start with semi-hard negative mining.
If we start training the model with online hard negative mining, the loss tends to just get stuck at a high value and not decrease. If we first train with semi-hard negative mining, and then switch to online hard negative mining, the model tends to do better.
Semi-hard negative mining has the same goal as $(*)$, but instead of focusing on all triplets in $\mathcal{T}$, it only looks to the triplets that already satisfy a specific ordering:
$$
\left\|f(x^a_i) - f(x^p_i) \right\|_2^2 <
\left\|f(x^a_i) - f(x^n_i) \right\|_2^2 <
\left\|f(x^a_i) - f(x^p_i) \right\|_2^2 + \alpha,
$$ and then choosing the hardest negative that satisfies this criterion. The semi-hard loss tends to quickly decrease to very small values because the underlying task is easier. The points are already ordered correctly, and any points which aren't ordered that way are ignored.
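A sketch of that selection for a single anchor (my own illustration; here "semi-hard" means farther than the positive but still within the margin $\alpha$ of it):

```python
import numpy as np

def semihard_negative(d_ap, d_an, alpha=0.2):
    """Index of the hardest semi-hard negative for one anchor, or None.

    d_ap: squared anchor-positive distance (scalar).
    d_an: (num_negatives,) squared anchor-negative distances.
    """
    mask = (d_an > d_ap) & (d_an < d_ap + alpha)   # semi-hard band
    if not mask.any():
        return None                                # no semi-hard negative in this batch
    candidates = np.where(mask)[0]
    return candidates[np.argmin(d_an[candidates])]  # hardest = closest in the band

d_an = np.array([0.05, 0.15, 0.9])   # three negatives for one anchor
print(semihard_negative(0.1, d_an))  # 1: 0.15 lies in (0.1, 0.3); 0.05 is too close, 0.9 too far
```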
I think of this as a certain kind of supervised pre-training of the model: sort the negatives that are within the margin of the anchors so that the online batch hard loss task has a good starting point.
Look out for a collapsed model
Triplet models are susceptible to mapping each input to the same point. When this happens, the distances in $(*)$ go to zero, the loss gets stuck at $\alpha$ and the model is basically done updating. Semi-hard negative mining can also help prevent this from happening.
In my experience, the loss tending towards $\alpha$ is a clear signal that training isn't working as desired and the embeddings are not informative. You can check whether this is the case by examining the embedding vectors: if the classes tend to be close together, there's a problem.
I'm not sure you want to softmax your embeddings.
The FaceNet authors project their outputs to the unit sphere, i.e. the embedding vectors are constrained to unit length. This is because if we allow the embedding vectors to have any length, then the simple fact that data in high dimensions is spread out makes it easy to satisfy the desired inequality $(*)$.
Choosing a unit sphere projection implies that the largest distance between two points must be twice the radius, i.e. 2. The choice of $\alpha$ is likewise strongly linked to this spherical projection. The FaceNet authors don't write about how they chose $\alpha=0.2$ at all, but my guess is they experimented and found this value yielded nice results. 🤷
Choosing softmax for your embeddings means that the embeddings have $L^1$ unit-length instead of $L^2$ unit length, and each element is non-negative. It seems like this is a much stronger restriction than projecting to a sphere, and I wonder whether it will produce the desired result. Likewise, it might mean that you need to be careful about choosing $\alpha$, since the largest possible distance between embeddings is different.
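To make the last point concrete (my own illustration, not from the paper): on the $L^2$ unit sphere the largest possible distance is between antipodal points, while for softmax outputs (non-negative, summing to 1) it is between two distinct one-hot vectors.

```python
import numpy as np

u = np.array([1.0, 0.0, 0.0])
print(np.linalg.norm(u - (-u)))   # 2.0: antipodal points on the unit sphere

e1 = np.array([1.0, 0.0, 0.0])    # limiting softmax outputs (one-hot corners)
e2 = np.array([0.0, 1.0, 0.0])
print(np.linalg.norm(e1 - e2))    # sqrt(2) ~ 1.414: the diameter of the simplex
```

So a margin tuned for sphere-normalized embeddings may not be appropriate for softmax-normalized ones.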
Putting it all together
First, train with semi-hard negative mining. Then online hard negative mining. I've found modest gains from further training with online batch hard mining, but usually this improvement is entirely realized from the first epoch of online batch hard mining, and second and later epochs are basically flat. Furthermore, you can also increase the difficulty of the task by increasing the batch size, so you might start with sizes of 500, increase it to 1000 and then 2000 after some number of epochs. This might help to eke out larger gains.
Track the hardest loss throughout
Changing how the triplets are selected changes the task; comparing the value of semi-hard loss to batch hard loss is like comparing apples to oranges. Because of how semi-hard loss is defined, its value will always be smaller than ordinary triplet loss. But we still want to achieve the inequality $(*)$! To make a consistent comparison as training progresses, you should measure the loss on the hardest task throughout training to confirm that the model is, indeed, improving as you change tasks during training.
Caveat: I don't know how or whether the use of BERT (or other Sesame Street models) in conjunction with triplet losses will change this analysis. I haven't used these models as extensively. However, because triplet loss is so tricky to use, my recommendation is starting there.
37,966 | Jaccard similarity in R | Looking at the Wikipedia page's edit history, it seems the problem was due to a confusion about the two types of mathematical notation that are used to represent the index. Using notation from set theory, we have:
$$
J(A,B) = \frac{|A\cap B|}{|A\cup B|} = \frac{|A\cap B|}{|A| + |B| - |A\cap B|}
$$
where $\cap$ denotes the intersection, $\cup$ denotes the union, and $\lvert\ \rvert$ denotes the cardinality.
Lower down, the formula was presented algebraically using counts from a matrix / contingency table $M$:
$$
J = \frac{M_{11}}{M_{10}+M_{01}+M_{11}}
$$
This seemed contradictory to an editor who commented that there was an "Erro in formula [sic]. Should be minus the intersection".
The two formulas are in fact consistent because although $|A\cap B|=M_{11}$, $|A|\ne M_{10}$ and $|B|\ne M_{01}$. The algebraic formula could have been presented (in a manner that is more cumbersome, but more clearly parallel to the top formula) like this:
$$
J = \frac{M_{11}}{\sum_j M_{1j} + \sum_i M_{i1} - M_{11}}
$$
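As a quick numeric cross-check that the two formulas agree (my own snippet, in Python for brevity even though the question is about R):

```python
import numpy as np

def jaccard_set(a, b):
    """|A ∩ B| / |A ∪ B| for binary membership vectors a, b (1 = in the set)."""
    inter = np.sum((a == 1) & (b == 1))
    union = np.sum((a == 1) | (b == 1))
    return inter / union

def jaccard_table(a, b):
    """M11 / (M10 + M01 + M11) from the 2x2 contingency-table counts."""
    m11 = np.sum((a == 1) & (b == 1))
    m10 = np.sum((a == 1) & (b == 0))
    m01 = np.sum((a == 0) & (b == 1))
    return m11 / (m10 + m01 + m11)

a = np.array([1, 1, 0, 1, 0, 0])
b = np.array([1, 0, 0, 1, 1, 0])
print(jaccard_set(a, b), jaccard_table(a, b))  # both 0.5: |A ∩ B| = 2, |A ∪ B| = 4
```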
37,967 | Jaccard similarity in R | That formula is wrong indeed.
It should be m11 / (m01 + m10 + m11), since the Jaccard index is the size of the intersection between two sets, divided by the size of the union between those sets.
The correct value is 8 / (12 + 23 + 8) = 0.186. I find it weird though, that this is not the same value you get from the R package.
You understood correctly that the Jaccard index is a value between 0 and 1. For the example you gave the correct index is 30 / (2 + 2 + 30) = 0.882.
37,968 | Jaccard similarity in R | I wrote a simple function for calculating the Jaccard index (similarity coefficient) and the complementary Jaccard distance for binary attributes:
# Your dataset
df2 <- data.frame(
IDS = c(1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0),
CESD = c(1, 1, 1, 0, 1, 1, 0, 0, 0, 0, 0, 1, 0, 1, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1))
# Function returns the Jaccard index and Jaccard distance
jaccard <- function(df, margin) {
if (margin == 1 | margin == 2) {
M_00 <- apply(df, margin, sum) == 0
M_11 <- apply(df, margin, sum) == 2
if (margin == 1) {
df <- df[!M_00, ]
JSim <- sum(M_11) / nrow(df)
} else {
df <- df[, !M_00]
JSim <- sum(M_11) / length(df)
}
JDist <- 1 - JSim
return(c(JSim = JSim, JDist = JDist))
} else stop("margin must be 1 or 2")
}
The function takes two arguments: df, a dataframe or matrix object, and margin, the MARGIN argument used in the apply function. If your data is in wide format set margin = 2 to apply sum over the columns. If your data is in long format set margin = 1 to apply sum over the rows.
> jaccard(df2, 1)
JSim JDist
0.1860465 0.8139535
37,969 | Jaccard similarity in R | Solved. The problem was that Wikipedia actually is wrong, specifically the formula:
m11/(m10+m01-m11)
37,970 | Jaccard similarity in R | Augmentation of @jsb's code to enable a similarity measure across all observations.
# Jaccard Index
library(dplyr)
# Your dataset
df <- data.frame(t(data.frame(c1=rnorm(100),
c2=rnorm(100),
c3=rnorm(100),
c4=rnorm(100),
c5=rnorm(100),
c6=rnorm(100))))
df[df > 0] <- 1
df[df <= 0] <- 0
df
# Function returns the Jaccard index and Jaccard distance
# Parameters:
# 1. df, dataframe of interest
# 2. margin, axis in which the apply function is meant to move along
jaccard <- function(df, margin=1) {
if (margin == 1 | margin == 2) {
M_00 <- apply(df, margin, sum) == 0
M_11 <- apply(df, margin, sum) == 2
if (margin == 1) {
df <- df[!M_00, ]
JSim <- sum(M_11) / nrow(df)
} else {
df <- df[, !M_00]
JSim <- sum(M_11) / length(df)
}
JDist <- 1 - JSim
return(c(JSim = JSim, JDist = JDist))
} else stop("margin must be 1 or 2")
}
jaccard(df[1:2,], margin=2)
jaccard_per_row <- function(df, margin=1){
require(magrittr)
require(dplyr)
key_pairs <- expand.grid(row.names(df), row.names(df))
results <- t(apply(key_pairs, 1, function(row) jaccard(df[c(row[1], row[2]),], margin=margin)))
key_pair <- key_pairs %>% mutate(pair = paste(Var1,"_",Var2,sep=""))
results <- data.frame(results)
row.names(results) <- key_pair$pair
results
}
jaccard_per_row(df, margin=2)
Output:
JSim JDist
c1_c1 1.0000000 0.0000000
c2_c1 0.3974359 0.6025641
c3_c1 0.3513514 0.6486486
c4_c1 0.3466667 0.6533333
c5_c1 0.3333333 0.6666667
c6_c1 0.3888889 0.6111111
c1_c2 0.3974359 0.6025641
c2_c2 1.0000000 0.0000000
c3_c2 0.3289474 0.6710526
c4_c2 0.4166667 0.5833333
c5_c2 0.3466667 0.6533333
c6_c2 0.3289474 0.6710526
c1_c3 0.3513514 0.6486486
c2_c3 0.3289474 0.6710526
c3_c3 1.0000000 0.0000000
c4_c3 0.2236842 0.7763158
c5_c3 0.3333333 0.6666667
c6_c3 0.3529412 0.6470588
c1_c4 0.3466667 0.6533333
c2_c4 0.4166667 0.5833333
c3_c4 0.2236842 0.7763158
c4_c4 1.0000000 0.0000000
c5_c4 0.3676471 0.6323529
c6_c4 0.2236842 0.7763158
c1_c5 0.3333333 0.6666667
c2_c5 0.3466667 0.6533333
c3_c5 0.3333333 0.6666667
c4_c5 0.3676471 0.6323529
c5_c5 1.0000000 0.0000000
c6_c5 0.2957746 0.7042254
c1_c6 0.3888889 0.6111111
c2_c6 0.3289474 0.6710526
c3_c6 0.3529412 0.6470588
c4_c6 0.2236842 0.7763158
c5_c6 0.2957746 0.7042254
c6_c6 1.0000000 0.0000000
Now you can determine which rows have a threshold above a desired percentage, and only keep very similar observations for your analysis.
Enjoy!
37,971 | Why is $p(A) \times p(B|A) = p(B) \times p(A|B)$? | Short answer in words: they're both equal to the probability of (A and B) occurring.
(Probability of A) times (Probability of B, given that A has happened) equals (Probability that both A and B happen).
Similarly, (Probability of B) times (Probability of A, given that B has happened) equals (Probability that both A and B happen).
37,972 | Why is $p(A) \times p(B|A) = p(B) \times p(A|B)$? | First, recall
$$P(A|B)=\frac{P(A∩B)}{P(B)}$$
and consequently
$$P(A∩B)=P(A|B)P(B)$$
The trick here is to realize the very simple fact that $P(A∩B)= P(B∩A)$. This fact is quite intuitive; the probability that UNC wins and Duke loses is the same as the probability that Duke loses and UNC wins. So, in reality, we have two options:
$$P(A∩B)=P(A|B)P(B)$$
and
$$P(B∩A)=P(B|A)P(A)$$
and so
$$P(B|A)P(A)=P(A|B)P(B)$$
37,973 | Why is $p(A) \times p(B|A) = p(B) \times p(A|B)$? | If $A$ and $B$ are events such that $P(A), P(B) > 0$, how can I compute the probability that both events occurred? That is, how can I determine $P(A\cap B)$?
Well, if $A$ and $B$ are independent events, then I know that
$$P(A\cap B) = P(A)P(B)\tag{1}$$
(that is just the definition of independence) and so $P(A\cap B)$ is straightforward
to compute. Wouldn't it be wonderful if I could write $P(A\cap B)$ in general
as $P(A)\alpha(B;A)$ where $\alpha(B;A)$ is some quantity
(obviously dependent on $A$ and $B$) with the magical property
that $P(A)\alpha(B;A)$ has value $P(A \cap B)$? Now for all this to
happen, we must have that $\alpha(B;A) = P(B)$ when
$A$ and $B$ are independent (so that $(1)$ holds). We can say a bit
more about this magical function. Since $(A\cap B) \subset A$, and so
$P(A\cap B) \leq P(A)$,
$\alpha(B;A)$ has maximum value $1$, and since it is possible that
$P(A\cap B) = 0$, $\alpha(B;A)$ can be as small as $0$. So, since
$\alpha(B;A)$ always has value in $[0,1]$, that is, it looks like a
probability and quacks (acts?) like a probability, we could even call it
a probability if we like, except we have not really said
what $\alpha(B;A)$ is the probability of. But whatever this
thingy is or what it means, we always have that
$$\begin{align}
P(A\cap B) &= P(A)\alpha(B;A) \tag{2}\\
\alpha(B;A) &= \frac{P(A\cap B)}{P(A)}\tag{3}
\end{align}$$
Similarly, if we interchange the roles of $A$ and $B$ in the above,
we can write
$$\begin{align}
P(B\cap A) &= P(B)\alpha(A;B) \tag{4}\\
\alpha(A;B) &= \frac{P(B\cap A)}{P(B)}\tag{5}
\end{align}$$
and since $A\cap B = B \cap A$, we can use $(2)$ and $(4)$ to
deduce that
$$P(A)\alpha(B;A) = P(B)\alpha(A;B)\tag{6}$$
which looks almost the same as what the OP is asking about.
So, what are these quantities $\alpha(B;A)$ and $\alpha(A;B)$ that
are "defined" by $(3)$ and $(5)$? Well, one way to think about this
is to consider that in order for both $A$ and $B$ to have occurred,
obviously $A$ must have occurred (which has probability $P(A)$), and
given that we have already assumed that $A$ has occurred, we should
think of $\alpha(B;A)$ as the conditional probability of $B$ given
that we have already assumed that $A$ has occurred. Thus, we call
$\alpha(B;A)$ as the
conditional probability of $B$ given that $A$ has occurred, denoted
$P(B\mid A)$ and defined as
$$ P(B\mid A)
= \frac{P(A\cap B)}{P(A)} ~ \text{provided that} ~ P(A) > 0.\tag{7}$$
Notice that when $A$ and $B$ are independent, $P(B\mid A) = P(B)$,
that is, knowing that $A$ has occurred does not in the least change
your estimate of the probability of $B$.
All of which is fine and dandy, but what is the intuition behind
all this? Well, many people make sense of probabilities as long-term
relative frequencies and so let us consider a sequence of $N$ trials
of the experiment, $N$ large. If the event $A$ occurred $N_A$ times on
these $N$ trials and the event $B$ occurred on $N_B$ trials, then
$$P(A) \approx \frac{N_A}{N}, \quad P(B) \approx \frac{N_B}{N}.$$
Similarly, $$P(A\cap B) \approx \frac{N_{A\cap B}}{N}$$ where we
note that the trials on which $A\cap B$ occurred are necessarily
a subset of the $N_A$ trials on which $A$ occurred (as well as a
subset of the $N_B$ trials on which $B$ occurred).
But, $P(B\mid A)$ is the (conditional) probability of $B$ given that
event $A$ has occurred, and so let us consider the $N_A$ trials on
which $A$ has occurred. On this subsequence of $N_A$ trials, what is
the relative frequency of $B$? Well, $B$ occurred on exactly $N_{A\cap B}$
of these $N_A$ trials, and so
$$P(B\mid A) \approx \frac{N_{A\cap B}}{N_A}
= \frac{\frac{N_{A\cap B}}{N}}{\frac{N_{A}}{N}}\approx \frac{P(A\cap B)}{P(A)}$$
which matches the definition in $(7)$.
OK, so now let the down-voting begin....
37,974 | Why is $p(A) \times p(B|A) = p(B) \times p(A|B)$? | Just trying a graphical intuition... Hope it flies...
37,975 | Why is $p(A) \times p(B|A) = p(B) \times p(A|B)$? | $p(B\mid A)p(A) = p(B\cap A)=p(A\mid B)p(B)$
37,976 | Why is $p(A) \times p(B|A) = p(B) \times p(A|B)$? | A simple example is like this. Suppose we have A containing two events, "A = 1" or "A = 2", and B containing three events, "B = 1", "B = 2", or "B = 3". The (arbitrary) cell counts for the occurrence of the events are listed in the following table.
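One set of cell counts consistent with all of the probabilities computed below (the (A = 1, B = 2) cell, the A = 1 row total, the B = 2 column total, and the grand total of 21 are pinned down by those probabilities; the even split of the remaining cells is only an illustrative assumption):

        B = 1   B = 2   B = 3   Row total
A = 1     2       2       2        6
A = 2     5       5       5       15
Total     7       7       7       21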
Now we can do a simple example:
\begin{align*}
P(A=1) & = \frac{6}{21} \\
P(B=2) &= \frac{7}{21} \\
P(A=1 \mid B=2) &= \frac{2}{7} \\
P(B=2 \mid A=1) &= \frac{2}{6} \\
\end{align*}
which yields
\begin{align*}
P(A=1) \times P(B=2 \mid A=1) &= \frac{6}{21} \times \frac{2}{6} = \frac{2}{21} \\
P(B=2) \times P(A=1 \mid B=2) &= \frac{7}{21} \times \frac{2}{7} = \frac{2}{21}
\end{align*}
It's easy to verify that the equation also holds for the other occurrences of events A and B.
The equation is generally valid. That's no magic of course.
If A and B are independent, then $P(B|A) = P(B)$, $P(A|B) = P(A)$. The two sides of the equation reduce to $P(A)P(B)$. The equation holds.
If A and B are not independent, then it's easily proved from the definition of conditional probability (or Bayes' theorem).
37,977 | Why is $p(A) \times p(B|A) = p(B) \times p(A|B)$? | Let us motivate the formula for conditional probability. We keep things as simple as possible. $S$, our sample space, will be finite. Let $E$ and $F$ be subsets of it (events).
When we write, $P(E|F)$ we are saying, "probability of $E$ given $F$". Thus, instead of the possibilities being in $S$, they are now all in $F$ since we are told that event $F$ has occurred. The new sample space is $F$. To find the probability of $E$ given $F$, it remains to count the number of possibilities which are in $E$. It would be wrong to write $|E|$ since this is counting the possibilities outside of $F$ (the new sample space). Therefore, there are $|E\cap F|$ possibilities of $E$ which happen within $F$.
It follows by the finite probability formula (event size divided by sample size),
$$ P(E|F) = \frac{|E\cap F|}{|F|} $$
But let us rewrite this formula to have probabilities on the right side. We use,
$$ P(E|F) = \frac{|E\cap F|}{|S|} \cdot \frac{|S|}{|F|} = \left( \frac{|E\cap F|}{|S|} \right) \bigg/\left( \frac{|F|}{|S|} \right) = \frac{P(E\cap F)}{P(F)} $$
37,978 | Model evaluation and comparison for selecting the best model | You are using a wide range of different types of models, and that makes this an interesting situation. Usually, when people say they are engaged in model selection, they mean that they have one type of model, with differing sets of predictors (for example, a multiple regression model with variables A, B, C & D, versus A, B & A*B, etc.). Note that in order to determine the best model, we need to specify what 'best' means; because you are focusing on data mining approaches, I am assuming that you want to maximize predictive accuracy. Let me say a couple of things:
Can you / should you compare them with a Bayes factor? I suspect this can be done, but I have little expertise there, so I should let another CV contributor address that; there are many here who are quite strong on that topic.
Should I compare all methods by AIC? I would not use the AIC in your situation. In general, I think highly of the AIC, but it is not appropriate for every task. There are different versions of the AIC, but in essence, they work the same: The AIC adjusts a goodness-of-fit measure for the ability of a model to produce goodness-of-fit. It does that by penalizing the model for the number of parameters it has. Thus, this assumes that every parameter contributes equally to the ability of a model to fit data. When comparing one multiple regression model to another multiple regression model, that is true. However, it is not at all clear that the addition of another parameter to a multiple regression model equally adds to the ability of the model to fit data as adding another parameter to a very different type of model (e.g., a neural network model, or a classification tree).
Should I compare all methods by Kappa? I also know somewhat less about using Kappa for this goal, but here is a resource with some good general information about it, and here is a paper I stumbled across that does use it in this way, and may be helpful to you (n.b., I haven't read it).
Should I compare all methods by cross-validation? This is probably your best bet. The model selected is the one that minimizes prediction error on a holdout set.
"Will I ever be certain of selecting the best model?" Nope. We're
playing a probabilistic game here, and that's just the way it goes,
unfortunately. One approach that is probably worth your while is to bootstrap your data, and apply the model selection approach of your choice to each bootsample. This will give you an idea about how clearly one model is favored over the rest. This will be computationally expensive (to say the least), but a small number of iterations should suffice for your purposes, I should think 100 would be enough.
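The bootstrap idea in the last point can be sketched in a few lines. A hypothetical illustration (Python used here for brevity), where `select_best` stands in for whatever model selection routine you prefer, e.g. one that returns the name of the model minimizing holdout error on the resampled data:

```python
import random

def bootstrap_winner_counts(data, select_best, b=100, seed=1):
    """Resample the data with replacement b times and count how often
    each model is chosen as best; a lopsided count suggests a clear winner."""
    rng = random.Random(seed)
    wins = {}
    for _ in range(b):
        boot = [rng.choice(data) for _ in range(len(data))]  # one bootsample
        best = select_best(boot)  # name of the winning model on this bootsample
        wins[best] = wins.get(best, 0) + 1
    return wins
```

If one model wins, say, 90 of 100 bootsamples, the selection is fairly stable; a near-even split means the data cannot clearly distinguish the candidates.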
37,979 | Model evaluation and comparison for selecting the best model | In my mind, cross-validation is a pretty solid gold standard for making comparisons that focus on models' abilities to predict new data. That said, for the GLM case at least, AIC has been demonstrated (Stone, 1977) to be asymptotically equivalent to cross-validation, so if you're OK with the asymptotic assumption, you can save yourself some compute time by going with AIC rather than computing the full cross-validation.
37,980 | Model evaluation and comparison for selecting the best model | Assuming you are using classification error or something similar as your performance measure, why don't you try cross-validation of all models?
Split your data into, say, 10 chunks, and then do 10 rounds of build-and-test, using one of those chunks as the test set and the other nine as the training set (i.e. round 1: train on 2-10, test on 1; round 2: train on 1 and 3-10, test on 2; round 3: train on 1-2 and 4-10, test on 3; and so on).
This approach helps you find which algorithm (and which parameters for those models) performs best.
One of the things I struggled with at first was that it is not so much the actual model that gets built that matters, so much as the function you call and the parameters you provide to it that are important.
37,981 | Transforming the dummy values to be able to take logs | Just because you're taking logs of some of the variables in your model, there's no reason you have to take logs of all of them. Leave a 0/1 coded dummy variable as it is.
37,982 | Transforming the dummy values to be able to take logs | I agree with onestop. You may also find this blog post from Econometrics Beat useful in learning how to interpret the coefficients on dummy variables when the dependent variable is logged:
http://davegiles.blogspot.com/2011/03/dummies-for-dummies.html
The Cliffs Notes version is that for a model like
\begin{equation}
\ln(Y) = a + b \cdot \ln(X) + c \cdot D + \varepsilon,
\end{equation}
where $X$ is a continuous regressor and $D$ is a zero-one dummy variable:
If $D$ switches from 0 to 1, the % impact of $D$ on $Y$ is $100 \cdot (\exp(c)-1).$
If $D$ switches from 1 to 0, the % impact of $D$ on $Y$ is $100 \cdot (\exp(-c)-1).$
And don't read anything into the title of the post.
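The percentage-impact formulas above are easy to evaluate numerically; a quick Python check (with a made-up coefficient $c = 0.10$) shows why reading the raw coefficient as "10%" would be slightly off:

```python
import math

def pct_impact(c):
    # % impact on Y when the dummy switches from 0 to 1: 100*(exp(c) - 1)
    return 100 * (math.exp(c) - 1)

def pct_impact_reverse(c):
    # % impact on Y when the dummy switches from 1 to 0: 100*(exp(-c) - 1)
    return 100 * (math.exp(-c) - 1)

c = 0.10  # hypothetical estimated dummy coefficient
print(round(pct_impact(c), 2))          # 10.52, not 10.0
print(round(pct_impact_reverse(c), 2))  # -9.52, not -10.0
```

Note the asymmetry between the two directions, which the linear reading of the coefficient hides.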
37,983 | Efficient way to merge multiple dataframes in R [closed] | You could cat them within R as follows:
read.table(pipe("cat bigfile1.txt bigfile2.txt bigfile3.txt"))
37,984 | Efficient way to merge multiple dataframes in R [closed] | Since you mention that they have the same column layout, you probably want the three (or more) data.frames to be appended below each other, right?
In that case, you can look at rbind:
cres = rbind(c, c1, c2)
Beware, though: with a lot of data.frames, I've noticed the performance to be subpar (this has to do with the way data.frames are managed in-memory, as lists of columns). Also, there may be issues with factors: having the same column layout, but holding different levels for factors may break this (haven't tried).
37,985 | Efficient way to merge multiple dataframes in R [closed] | Check out rbind.fill from the plyr package. I've recently seen Hadley's comment that it is efficient, but I've been unable to find it.
37,986 | Efficient way to merge multiple dataframes in R [closed] | If by efficient you mean "fast," check out the data.table package. It has very fast merges.
37,987 | Efficient way to merge multiple dataframes in R [closed] | You can try a join statement in the sqldf package. I find working with SQL much easier in the case of large datasets. Please find the link here for reference.
37,988 | Can arbitrary precision calculations be useful for machine learning? | No, it is almost never a problem. First of all, there's a measurement error—even physicists account for it, while the rest of us rarely can be lucky enough for as precise measurements as theirs. Second, you are dealing with sampled data, so there is error due to sampling. Finally, we have all kinds of biases and noise in the data. In the end, we are usually far from having precise data, so we don’t need algorithms more precise than the data itself.
More than this, there is research showing that you can train neural networks with low (8-bit, 2-bit) precision without a performance drop. Some argue that this might even have a regularizing effect. It can probably be extended to some degree to other models.
37,989 | Can arbitrary precision calculations be useful for machine learning? | Yes, precision can be problematic on multiple fronts. First, regression itself generally approaches a flat region, like the bottom of a parabola, where the minimum loss function of the regression is located. This typically halves the number of significant figures in the loss function and may reduce the precision of any parameters of a model even more than that. Second, in order to calculate some transcendental functions one may need much higher precision for the calculation process itself than is available for the functional value when that process is completed. See https://blogs.ubc.ca/infiniteseriesmodule/units/unit-3-power-series/taylor-series/maclaurin-expansion-of-sinx/ For example, look at how the intermediate terms of the series expansion of sin(x) achieve large magnitudes before convergence for sin(12 radians).
This shows that although sin(x) is bounded above and below by $\pm1$, the individual terms are sometimes $\pm20000$, so if we are not careful, the absolute error from those terms could be greater than 1. Now it is true that one can, rather than take the sine of 12, transform that request so that the calculation is performed in the region of principal sine, for which the precision problem is mitigated (but not eliminated), however, the point here is that to guarantee any particular precision for a function, more precision may be necessary during the calculation than is returned when that functional answer is produced.
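A short Python check (standard double precision; 30 series terms, which is more than enough to converge at $x=12$) makes this concrete: the intermediate terms for $\sin(12)$ reach magnitudes near $1.9 \times 10^4$ even though the result is bounded by $1$:

```python
import math

x = 12.0
# Maclaurin terms of sin(x): (-1)^k * x^(2k+1) / (2k+1)!
terms = [(-1) ** k * x ** (2 * k + 1) / math.factorial(2 * k + 1)
         for k in range(30)]

print(max(abs(t) for t in terms))  # largest intermediate term, about 1.9e4
print(sum(terms))                  # the alternating sum is back inside [-1, 1]
print(math.sin(x))                 # reference value from the library routine
```

In double precision the summed series still agrees closely with math.sin(12) here, but the headroom consumed by the large intermediate terms is exactly the effect described above: guaranteeing a given output precision can require carrying more precision through the intermediate computation.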
Edit: The additional OP edit question is whether quadruple precision is enough. The correct answer is sometimes. To see what precision is needed some type of error propagation analysis is needed. Also see propagation of uncertainty and the delta method. That will tell you what the function for fitting needs for precision, if you know what that is. Then one has to account for the precision loss for the loss functions used during regression, and most software allows you to specify what that is. One should then augment the data calculation precision to include both the functional precision loss and the regression precision loss.
In the sine example above, one can find the maximum absolute term magnitude in low precision, let's call that $\Delta$, and then set the precision at $D+\log_{10}(\Delta)$, where $D$ is the precision desired, and only then calculate each term value at the higher precision for later summation. Some software routines augment precision automatically during summation, others do not, and if not, then the error propagation from summation would be added to the precision request for $\sin(x)$.
One final suggestion. Calculating what precision is needed for which problem can be daunting, and in some cases for which the algorithms are not well characterized (e.g., some machine learning routines) it is not reasonably achievable. In that case, one could proceed heuristically by increasing precision until it no longer makes a difference to the precision desired in the answer. But don't be surprised if that turns out to be a larger number of significant figures than one would guess without doing such a test.
37,990 | Are discrete random variables, with same domain and uniform probability, always independent? | It is simple to construct an example where both variables are marginally uniformly distributed, but they are not independent. The simplest example is to take $X \sim \text{U} \{ -1,0,1 \}$ and let $Y=X$. In this case both of the variables have a uniform distribution, but they are perfectly correlated.
37,991 | Are discrete random variables, with same domain and uniform probability, always independent? | You can say $X$ and $Y$ each have mean $0$ and variance $\frac23$ and in general
Their covariance is equal to $\mathbb E[XY]$ here and can be any value between $-\frac23$ and $+\frac23$ so their correlation can be any value between $-1$ and $+1$
$\mathbb P[XY=+1]$ and $\mathbb P[XY=-1]$ can each be between $0$ and $\frac23$
$\mathbb P[XY=0]$ can be between $\frac13$ and $\frac23$
If they are independent then
Their covariance is equal to $\mathbb E[XY]$ and is $0$
$\mathbb P[XY=+1]=\mathbb P[XY=-1] =\frac29$
$\mathbb P[XY=0] =\frac59$
It is possible for them to have covariance and $\mathbb E[XY]$ equal to $0$ without them being independent; for example, if $(0,0)$ has probability $\frac13$ while $(+1,+1)$, $(+1,-1)$, $(-1,+1)$, $(-1,-1)$ each have probability $\frac16$, in which case $\mathbb P[XY=+1]=\mathbb P[XY=-1]=\frac13$ and also $\mathbb P[XY=0]=\frac13$.
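The last example can be verified by direct enumeration. A small Python check (using exact fractions) confirms uniform marginals, $\mathbb E[XY] = 0$, and the failure of independence at $(0,0)$:

```python
from fractions import Fraction as F

# Joint pmf from the example: (0,0) carries mass 1/3 and the four
# corner points carry mass 1/6 each; all other pairs have mass 0.
pmf = {(0, 0): F(1, 3),
       (1, 1): F(1, 6), (1, -1): F(1, 6),
       (-1, 1): F(1, 6), (-1, -1): F(1, 6)}

px = {v: sum(p for (x, y), p in pmf.items() if x == v) for v in (-1, 0, 1)}
py = {v: sum(p for (x, y), p in pmf.items() if y == v) for v in (-1, 0, 1)}
e_xy = sum(x * y * p for (x, y), p in pmf.items())

print(px)                          # uniform marginal: 1/3 for each value
print(py)                          # uniform marginal: 1/3 for each value
print(e_xy)                        # E[XY] = 0, so covariance is 0
print(pmf[(0, 0)], px[0] * py[0])  # 1/3 vs 1/9: not independent
```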
37,992 | Are discrete random variables, with same domain and uniform probability, always independent? | No, suppose you have $X$ as you described, and for $Y$ you flip a coin: with probability 0.5 it takes the value of $X$, and otherwise you sample uniformly from $\{-1,0,1\}$.
The marginal probabilities will be uniform yet they are obviously dependent. Short simulation below.
x <- sample(c(-1, 0, 1), 100000, TRUE)
coin <- rbinom(100000, 1, 0.5)
y <- coin * sample(c(-1, 0, 1), 100000, TRUE) + (1 - coin) * x
> cor(x, y)
[1] 0.4986911
You can see that $X$ and $Y$ are dependent and have the same marginal uniform distribution.
37,993 | Why do I see a pattern in the residuals in this well specified model? | How close the residuals at specific $x$ values are to zero depends on the sample size. Now, the sample size in real examples will be whatever it is, so there's not much use saying it should be bigger, but you do need to calibrate your expectations of what sort of departures from zero mean are detectable with small samples.
A useful step in this direction is to simulate multiple realisations, not just one. Here are four realisations of the top left plot from your diagnostics
Looking at just one of them, you might think there was a pattern. Looking at all four of them shows the sort of 'pattern' that arises just by chance. You could do this for multiple sample sizes, and get more idea of the sort of patterns that arise by chance from well-specified models vs the sort that mean something.
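The same calibration exercise works without plots. A minimal Python sketch (pure-Python OLS on made-up toy data with only an age regressor; the seed and the under-40 group split are arbitrary) repeatedly simulates a correctly specified model and shows that the overall residual mean is always numerically zero while the residual mean within a small subgroup wanders from replication to replication purely by chance:

```python
import random

def residual_means(n_reps=4, seed=0):
    # True model: y = 2*age + N(0, 2). Fit simple OLS of y on age, then
    # report (overall residual mean, residual mean among ages < 40).
    rng = random.Random(seed)
    ages = [18, 19, 20, 40, 41, 42, 60, 61, 62, 40, 41, 42]
    abar = sum(ages) / len(ages)
    sxx = sum((a - abar) ** 2 for a in ages)
    results = []
    for _ in range(n_reps):
        y = [2 * a + rng.gauss(0, 2) for a in ages]
        ybar = sum(y) / len(y)
        b1 = sum((a - abar) * (v - ybar) for a, v in zip(ages, y)) / sxx
        b0 = ybar - b1 * abar
        resid = [v - (b0 + b1 * a) for a, v in zip(ages, y)]
        young = [r for a, r in zip(ages, resid) if a < 40]
        results.append((sum(resid) / len(resid), sum(young) / len(young)))
    return results

for overall, young in residual_means():
    print(round(overall, 12), round(young, 3))
```

The first column is forced to (numerical) zero by least squares; the second column takes a different nonzero value in each replication, which is the sort of apparent 'pattern' a single small sample can show.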
37,994 | Why do I see a pattern in the residuals in this well specified model? | Thomas' answer is great (+1), I just wanted to clear up a particular confusion in the wording of your question:
there is a pattern in the residuals, they don't seem to be zero-mean.
The mean is zero. You can easily check this:
set.seed(1)
df <- data.frame(id=seq(1, 12, 1))
df$age <- c(18, 19, 20, 40, 41, 42,
60, 61, 62, 40, 41, 42)
df$treat <- c(rep(1,6), rep(0,6))
df$rec <- 2*df$age + rnorm(nrow(df), 0, 2)
mod2 <- lm(df$rec ~ df$treat+df$age)
# Mean value of the residuals
mean(residuals(mod2))
This is equal to -6.473289e-17, which is $-0.0000000000000000647 \approx 0$. The only difference from zero here is due to (lack of) floating-point precision.
Note that you don't even have to set the second argument of rnorm() to zero:
set.seed(1)
df <- data.frame(id=seq(1, 12, 1))
df$age <- c(18, 19, 20, 40, 41, 42,
60, 61, 62, 40, 41, 42)
df$treat <- c(rep(1,6), rep(0,6))
df$rec <- 2*df$age + rnorm(nrow(df), 1000, 2) # large mean
mod2 <- lm(df$rec ~ df$treat+df$age)
# Mean value of the residuals
mean(residuals(mod2))
Which returns -1.853384e-17... Still practically zero. So what happened? The 1000 just got added to the intercept.
37,995 | Understanding standard errors on a regression table | The standard error determines how much variability "surrounds" a coefficient estimate. A coefficient is statistically significant if we can conclude that it differs from zero. The typical rule of thumb is that you go about two standard errors above and below the estimate to get a 95% confidence interval for a coefficient estimate.
So most likely what your professor is doing is looking to see if the coefficient estimate is at least two standard errors away from 0 (or in other words looking to see if the standard error is small relative to the coefficient value). This is how you can eyeball significance without a p-value.
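The eyeball rule can be stated as one line of code (the coefficient and standard-error values below are made up for illustration):

```python
def eyeball_significant(coef, se, k=2.0):
    # Rule of thumb: the estimate lies at least k standard errors from zero.
    return abs(coef) >= k * se

print(eyeball_significant(0.93, 0.20))  # True: the estimate is 4.65 SEs from 0
print(eyeball_significant(0.15, 0.20))  # False: only 0.75 SEs from 0
```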
37,996 | Understanding standard errors on a regression table | I will stick to the case of a simple linear regression. Generalisation to multiple regression is straightforward in the principles albeit ugly in the algebra. Imagine we have some values of a predictor or explanatory variable, $x_i$, and we observe the values of the response variable at those points, $y_i$. If the true relationship is linear, and my model is correctly specified (for instance no omitted-variable bias from other predictors I have forgotten to include), then those $y_i$ were generated from:
$$y_i = \beta_0 + \beta_1 x_i + \epsilon_i$$
Now $\epsilon_i$ is the random error or disturbance term, which has, let's say, the $\mathcal{N}(0,\sigma^2)$ distribution. That assumption of normality, with the same variance (homoscedasticity) for each $\epsilon_i$, is important for all those lovely confidence intervals and significance tests to work. For the same reason I shall assume that $\epsilon_i$ and $\epsilon_j$ are not correlated so long as $i \neq j$ (we must permit, of course, the inevitable and harmless fact that $\epsilon_i$ is perfectly correlated with itself) - this is the assumption that disturbances are not autocorrelated.
Note that all we get to observe are the $x_i$ and $y_i$, but that we can't directly see the $\epsilon_i$ and their $\sigma^2$ or (more interesting to us) the $\beta_0$ and $\beta_1$. We obtain (OLS or "least squares") estimates of those regression parameters, $\hat{\beta_0}$ and $\hat{\beta_1}$, but we wouldn't expect them to match $\beta_0$ and $\beta_1$ exactly. Moreover, if I were to go away and repeat my sampling process, then even if I use the same $x_i$'s as the first sample, I won't obtain the same $y_i$'s - and therefore my estimates $\hat{\beta_0}$ and $\hat{\beta_1}$ will be different to before. This is because in each new realisation, I get different values of the error $\epsilon_i$ contributing towards my $y_i$ values.
The fact that my regression estimators come out differently each time I resample, tells me that they follow a sampling distribution. If you know a little statistical theory, then that may not come as a surprise to you - even outside the context of regression, estimators have probability distributions because they are random variables, which is in turn because they are functions of sample data that is itself random. With the assumptions listed above, it turns out that:
$$\hat{\beta_0} \sim \mathcal{N}\left(\beta_0,\, \sigma^2 \left( \frac{1}{n} + \frac{\bar{x}^2}{\sum(X_i - \bar{X})^2} \right) \right) $$
$$\hat{\beta_1} \sim \mathcal{N}\left(\beta_1, \, \frac{\sigma^2}{\sum(X_i - \bar{X})^2} \right) $$
It's nice to know that $\mathbb{E}(\hat{\beta_i}) = \beta_i$, so that "on average" my estimates will match the true regression coefficients (actually this fact doesn't need all the assumptions I made before - for instance it doesn't matter if the error term is not normally distributed, or if they're heteroscedastic, but correct model specification with no autocorrelation of errors is important). If I were to take many samples, the average of the estimates I obtain would converge towards the true parameters. You may find this less reassuring once you remember that we only get to see one sample! But the unbiasedness of our estimators is a good thing.
Also interesting is the variance. In essence this is a measure of how badly wrong our estimators are likely to be. For example, it'd be very helpful if we could construct a $z$ interval that lets us say that the estimate for the slope parameter, $\hat{\beta_1}$, we would obtain from a sample is 95% likely to lie within approximately $\pm 1.96 \sqrt{\frac{\sigma^2}{\sum(X_i - \bar{X})^2}}$ of the true (but unknown) value of the slope, $\beta_1$. Sadly this is not as useful as we would like because, crucially, we do not know $\sigma^2$. It's a parameter for the variance of the whole population of random errors, and we only observed a finite sample.
If instead of $\sigma$ we use the estimate $s$ we calculated from our sample (confusingly, this is often known as the "standard error of the regression" or "residual standard error") we can find the standard error for our estimates of the regression coefficients. For $\hat{\beta_1}$ this would be $\sqrt{\frac{s^2}{\sum(X_i - \bar{X})^2}}$. Now, because we have had to estimate the variance of a normally distributed variable, we will have to use Student's $t$ rather than $z$ to form confidence intervals - we use the residual degrees of freedom from the regression, which in simple linear regression is $n-2$ and for multiple regression we subtract one more degree of freedom for each additional slope estimated. But for reasonably large $n$, and hence larger degrees of freedom, there isn't much difference between $t$ and $z$. Rules of thumb like "there's a 95% chance that the observed value will lie within two standard errors of the correct value" or "an observed slope estimate that is four standard errors away from zero will clearly be highly statistically significant" will work just fine.
I find a good way of understanding error is to think about the circumstances in which I'd expect my regression estimates to be more (good!) or less (bad!) likely to lie close to the true values. Suppose that my data were "noisier", which happens if the variance of the error terms, $\sigma^2$, were high. (I can't see that directly, but in my regression output I'd likely notice that the standard error of the regression was high.) Then most of the variation I can see in $y$ will be due to the random error. This will mask the "signal" of the relationship between $y$ and $x$, which will now explain a relatively small fraction of variation, and make the shape of that relationship harder to ascertain. Note that this does not mean I will underestimate the slope - as I said before, the slope estimator will be unbiased, and since it is normally distributed, I'm just as likely to underestimate as I am to overestimate. But since it is harder to pick the relationship out from the background noise, I am more likely than before to make big underestimates or big overestimates. My standard error has increased, and my estimated regression coefficients are less reliable.
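The same simulation idea makes the "noisier data, shakier estimates" point concrete (again a Python sketch with hypothetical numbers): doubling the error standard deviation should roughly double the spread of the slope estimates across resamples, matching the $\sigma^2$ in the numerator of the variance of $\hat{\beta_1}$.

```python
# Illustrative check: the standard deviation of the slope estimator scales
# with the error standard deviation sigma.
import random

random.seed(2)
x = [float(i) for i in range(30)]
xbar = sum(x) / len(x)
sxx = sum((xi - xbar) ** 2 for xi in x)

def slope_sd(sigma, reps=4000):
    ests = []
    for _ in range(reps):
        y = [1.0 + 0.5 * xi + random.gauss(0.0, sigma) for xi in x]
        ybar = sum(y) / len(y)
        ests.append(sum((xi - xbar) * (yi - ybar)
                        for xi, yi in zip(x, y)) / sxx)
    m = sum(ests) / reps
    return (sum((e - m) ** 2 for e in ests) / reps) ** 0.5

ratio = slope_sd(2.0) / slope_sd(1.0)
print(round(ratio, 2))                       # close to 2
```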
Intuition matches algebra - note how $s^2$ appears in the numerator of my standard error for $\hat{\beta_1}$, so if it's higher, the distribution of $\hat{\beta_1}$ is more spread out. This means more probability in the tails (just where I don't want it - this corresponds to estimates far from the true value) and less probability around the peak (so less chance of the slope estimate being near the true slope). Here are the probability density curves of $\hat{\beta_1}$ with high and low standard error:
It's instructive to rewrite the standard error of $\hat{\beta_1}$ using the mean square deviation, $$\text{MSD}(x) = \frac{1}{n} \sum(x_i - \bar{x})^2$$
This is a measure of how spread out the range of observed $x$ values was. With this in mind, the standard error of $\hat{\beta_1}$ becomes:
$$\text{se}(\hat{\beta_1}) = \sqrt{\frac{s^2}{n \text{MSD}(x)}}$$
The fact that $n$ and $\text{MSD}(x)$ are in the denominator reaffirms two other intuitive facts about our uncertainty. We can reduce uncertainty by increasing sample size, while keeping constant the range of $x$ values we sample over. As ever, this comes at a cost - that square root means that to halve our uncertainty, we would have to quadruple our sample size (a situation familiar from many applications outside regression, such as picking a sample size for political polls). But it's also easier to pick out the trend of $y$ against $x$, if we spread our observations out across a wider range of $x$ values and hence increase the MSD. Again, by quadrupling the spread of $x$ values, we can halve our uncertainty in the slope parameters.
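Both "quadruple to halve" statements fall straight out of the formula; a few lines of arithmetic (in Python, with made-up values of $s$ and $\text{MSD}$) confirm them:

```python
# Direct arithmetic on se = sqrt(s^2 / (n * MSD(x))): quadrupling either
# n or MSD(x) halves the standard error.  The values of s, n and MSD here
# are arbitrary illustrative numbers.
def se_slope(s, n, msd):
    return (s ** 2 / (n * msd)) ** 0.5

base = se_slope(s=1.5, n=50, msd=4.0)
more_data = se_slope(s=1.5, n=200, msd=4.0)    # quadruple the sample size
more_spread = se_slope(s=1.5, n=50, msd=16.0)  # quadruple the spread instead
print(base / more_data, base / more_spread)    # both ratios equal 2
```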
When you chose your sample size, took steps to reduce random error (e.g. from measurement error) and perhaps decided on the range of predictor values you would sample across, you were hoping to reduce the uncertainty in your regression estimates. In that respect, the standard errors tell you just how successful you have been.
I append code for the plot:
x <- seq(-5, 5, length=200)
y <- dnorm(x, mean=0, sd=1)
y2 <- dnorm(x, mean=0, sd=2)
plot(x, y, type = "l", lwd = 2, axes = FALSE, xlab = "estimated coefficient", ylab="")
lines(x, y2, lwd = 2, col = "blue")
axis(1, at = c(-5, -2.5, 0, 2.5, 5), labels = c("", "large underestimate", "true β", "large overestimate", ""))
abline(v=0, lty = "dotted")
legend("topright", title="Standard error of estimator",
c("Low","High"), fill=c("black", "blue"), horiz=TRUE) | Understanding standard errors on a regression table | I will stick to the case of a simple linear regression. Generalisation to multiple regression is straightforward in the principles albeit ugly in the algebra. Imagine we have some values of a predicto | Understanding standard errors on a regression table
I will stick to the case of a simple linear regression. Generalisation to multiple regression is straightforward in the principles albeit ugly in the algebra. Imagine we have some values of a predictor or explanatory variable, $x_i$, and we observe the values of the response variable at those points, $y_i$. If the true relationship is linear, and my model is correctly specified (for instance no omitted-variable bias from other predictors I have forgotten to include), then those $y_i$ were generated from:
$$y_i = \beta_0 + \beta_1 x_i + \epsilon_i$$
Now $\epsilon_i$ is the random error or disturbance term, which has, let's say, the $\mathcal{N}(0,\sigma^2)$ distribution. That assumption of normality, with the same variance (homoscedasticity) for each $\epsilon_i$, is important for all those lovely confidence intervals and significance tests to work. For the same reason I shall assume that $\epsilon_i$ and $\epsilon_j$ are not correlated so long as $i \neq j$ (we must permit, of course, the inevitable and harmless fact that $\epsilon_i$ is perfectly correlated with itself) - this is the assumption that disturbances are not autocorrelated.
Note that all we get to observe are the $x_i$ and $y_i$, but that we can't directly see the $\epsilon_i$ and their $\sigma^2$ or (more interesting to us) the $\beta_0$ and $\beta_1$. We obtain (OLS or "least squares") estimates of those regression parameters, $\hat{\beta_0}$ and $\hat{\beta_1}$, but we wouldn't expect them to match $\beta_0$ and $\beta_1$ exactly. Moreover, if I were to go away and repeat my sampling process, then even if I use the same $x_i$'s as the first sample, I won't obtain the same $y_i$'s - and therefore my estimates $\hat{\beta_0}$ and $\hat{\beta_1}$ will be different to before. This is because in each new realisation, I get different values of the error $\epsilon_i$ contributing towards my $y_i$ values.
The fact that my regression estimators come out differently each time I resample, tells me that they follow a sampling distribution. If you know a little statistical theory, then that may not come as a surprise to you - even outside the context of regression, estimators have probability distributions because they are random variables, which is in turn because they are functions of sample data that is itself random. With the assumptions listed above, it turns out that:
$$\hat{\beta_0} \sim \mathcal{N}\left(\beta_0,\, \sigma^2 \left( \frac{1}{n} + \frac{\bar{x}^2}{\sum(X_i - \bar{X})^2} \right) \right) $$
$$\hat{\beta_1} \sim \mathcal{N}\left(\beta_1, \, \frac{\sigma^2}{\sum(X_i - \bar{X})^2} \right) $$
It's nice to know that $\mathbb{E}(\hat{\beta_i}) = \beta_i$, so that "on average" my estimates will match the true regression coefficients (actually this fact doesn't need all the assumptions I made before - for instance it doesn't matter if the error terms are not normally distributed, or if they're heteroscedastic, but correct model specification with no autocorrelation of errors is important). If I were to take many samples, the average of the estimates I obtain would converge towards the true parameters. You may find this less reassuring once you remember that we only get to see one sample! But the unbiasedness of our estimators is a good thing.
Also interesting is the variance. In essence this is a measure of how badly wrong our estimators are likely to be. For example, it'd be very helpful if we could construct a $z$ interval that lets us say that the estimate for the slope parameter, $\hat{\beta_1}$, we would obtain from a sample is 95% likely to lie within approximately $\pm 1.96 \sqrt{\frac{\sigma^2}{\sum(X_i - \bar{X})^2}}$ of the true (but unknown) value of the slope, $\beta_1$. Sadly this is not as useful as we would like because, crucially, we do not know $\sigma^2$. It's a parameter for the variance of the whole population of random errors, and we only observed a finite sample.
If instead of $\sigma$ we use the estimate $s$ we calculated from our sample (confusingly, this is often known as the "standard error of the regression" or "residual standard error") we can find the standard error for our estimates of the regression coefficients. For $\hat{\beta_1}$ this would be $\sqrt{\frac{s^2}{\sum(X_i - \bar{X})^2}}$. Now, because we have had to estimate the variance of a normally distributed variable, we will have to use Student's $t$ rather than $z$ to form confidence intervals - we use the residual degrees of freedom from the regression, which in simple linear regression is $n-2$ and for multiple regression we subtract one more degree of freedom for each additional slope estimated. But for reasonably large $n$, and hence larger degrees of freedom, there isn't much difference between $t$ and $z$. Rules of thumb like "there's a 95% chance that the observed value will lie within two standard errors of the correct value" or "an observed slope estimate that is four standard errors away from zero will clearly be highly statistically significant" will work just fine.
I find a good way of understanding error is to think about the circumstances in which I'd expect my regression estimates to be more (good!) or less (bad!) likely to lie close to the true values. Suppose that my data were "noisier", which happens if the variance of the error terms, $\sigma^2$, were high. (I can't see that directly, but in my regression output I'd likely notice that the standard error of the regression was high.) Then most of the variation I can see in $y$ will be due to the random error. This will mask the "signal" of the relationship between $y$ and $x$, which will now explain a relatively small fraction of variation, and make the shape of that relationship harder to ascertain. Note that this does not mean I will underestimate the slope - as I said before, the slope estimator will be unbiased, and since it is normally distributed, I'm just as likely to underestimate as I am to overestimate. But since it is harder to pick the relationship out from the background noise, I am more likely than before to make big underestimates or big overestimates. My standard error has increased, and my estimated regression coefficients are less reliable.
Intuition matches algebra - note how $s^2$ appears in the numerator of my standard error for $\hat{\beta_1}$, so if it's higher, the distribution of $\hat{\beta_1}$ is more spread out. This means more probability in the tails (just where I don't want it - this corresponds to estimates far from the true value) and less probability around the peak (so less chance of the slope estimate being near the true slope). Here are the probability density curves of $\hat{\beta_1}$ with high and low standard error:
It's instructive to rewrite the standard error of $\hat{\beta_1}$ using the mean square deviation, $$\text{MSD}(x) = \frac{1}{n} \sum(x_i - \bar{x})^2$$
This is a measure of how spread out the range of observed $x$ values was. With this in mind, the standard error of $\hat{\beta_1}$ becomes:
$$\text{se}(\hat{\beta_1}) = \sqrt{\frac{s^2}{n \text{MSD}(x)}}$$
The fact that $n$ and $\text{MSD}(x)$ are in the denominator reaffirms two other intuitive facts about our uncertainty. We can reduce uncertainty by increasing sample size, while keeping constant the range of $x$ values we sample over. As ever, this comes at a cost - that square root means that to halve our uncertainty, we would have to quadruple our sample size (a situation familiar from many applications outside regression, such as picking a sample size for political polls). But it's also easier to pick out the trend of $y$ against $x$, if we spread our observations out across a wider range of $x$ values and hence increase the MSD. Again, by quadrupling the spread of $x$ values, we can halve our uncertainty in the slope parameters.
When you chose your sample size, took steps to reduce random error (e.g. from measurement error) and perhaps decided on the range of predictor values you would sample across, you were hoping to reduce the uncertainty in your regression estimates. In that respect, the standard errors tell you just how successful you have been.
I append code for the plot:
x <- seq(-5, 5, length=200)
y <- dnorm(x, mean=0, sd=1)
y2 <- dnorm(x, mean=0, sd=2)
plot(x, y, type = "l", lwd = 2, axes = FALSE, xlab = "estimated coefficient", ylab="")
lines(x, y2, lwd = 2, col = "blue")
axis(1, at = c(-5, -2.5, 0, 2.5, 5), labels = c("", "large underestimate", "true β", "large overestimate", ""))
abline(v=0, lty = "dotted")
legend("topright", title="Standard error of estimator",
c("Low","High"), fill=c("black", "blue"), horiz=TRUE) | Understanding standard errors on a regression table
I will stick to the case of a simple linear regression. Generalisation to multiple regression is straightforward in the principles albeit ugly in the algebra. Imagine we have some values of a predicto |
37,997 | Understanding standard errors on a regression table | The SE is a measure of precision of the estimate. It also can indicate model fit problems. For example, if it is abnormally large relative to the coefficient then that is a red flag for (multi)collinearity. The model is essentially unable to precisely estimate the parameter because of collinearity with one or more of the other predictors.
The SE is essentially the standard deviation of the sampling distribution for that particular statistic. This is why a coefficient that is more than about twice as large as the SE will be statistically significant at p < .05. You might go back and look at the standard deviation table for the standard normal distribution (Wikipedia has a nice visual of the distribution).
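Where the "about twice the SE" figure comes from can be checked with the standard normal itself - a short Python sketch (assuming the normal approximation holds, which the later paragraphs qualify):

```python
# Under the normal approximation, a coefficient sitting 2 standard errors
# from zero has a two-sided p-value just under .05; at 1.96 SEs the p-value
# is .05 almost exactly.
from statistics import NormalDist

z = NormalDist()
p_at_2se = 2 * (1 - z.cdf(2.0))
p_at_196se = 2 * (1 - z.cdf(1.96))
print(round(p_at_2se, 4), round(p_at_196se, 4))   # 0.0455 0.05
```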
Think of it this way, if you assume that the null hypothesis is true - that is, assume that the actual coefficient in the population is zero, how unlikely would your sample have to be in order to get the coefficient you got? If your sample statistic (the coefficient) is 2 standard errors (again, think "standard deviations") away from zero then it is one of only 5% (i.e. p=.05) of samples that are possible assuming that the true value (the population parameter) is zero. That's a rather improbable sample, right? So we conclude instead that our sample isn't that improbable, it must be that the null hypothesis is false and the population parameter is some non-zero value. We "reject the null hypothesis." Hence, the statistic is "significant" when it is 2 or more standard deviations away from zero which basically means that the null hypothesis is probably false because that would entail us randomly picking a rather unrepresentative and unlikely sample.
I am playing a little fast and loose with the numbers. There is, of course, a correction for the degrees of freedom and a distinction between 1 or 2 tailed tests of significance. With a good number of degrees of freedom (around 70 if I recall) the coefficient will be significant on a two tailed test if it is (at least) twice as large as the standard error. With a 1 tailed test where all 5% of the sampling distribution is lumped in that one tail, those same 70 degrees of freedom will require that the coefficient be only (at least) ~1.7 times larger than the standard error.
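Those two recalled figures hold up. As a sketch (in Python, hand-rolling the Student-t quantile by numerical integration and bisection so no statistics library is assumed), the critical values at 70 degrees of freedom come out near 2.0 two-tailed and near 1.7 one-tailed:

```python
# Student-t critical values via the t density: integrate the upper tail with
# the trapezoid rule, then bisect for the cutoff that leaves `alpha` above it.
import math

def t_crit(alpha_one_sided, df):
    c = math.gamma((df + 1) / 2) / (math.sqrt(df * math.pi) * math.gamma(df / 2))
    pdf = lambda t: c * (1 + t * t / df) ** (-(df + 1) / 2)

    def upper_tail(x, steps=8000, upper=50.0):
        # P(T > x); the mass beyond `upper` is negligible for moderate df
        h = (upper - x) / steps
        area = 0.5 * (pdf(x) + pdf(upper))
        area += sum(pdf(x + i * h) for i in range(1, steps))
        return area * h

    lo, hi = 0.0, 10.0
    for _ in range(40):
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if upper_tail(mid) > alpha_one_sided else (lo, mid)
    return (lo + hi) / 2

two_tailed = t_crit(0.025, 70)    # ~1.99: "twice the SE" in round numbers
one_tailed = t_crit(0.05, 70)     # ~1.67: the "~1.7" figure
print(round(two_tailed, 2), round(one_tailed, 2))
```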
So a coefficient (at least) twice as large as its standard error is a good rule of thumb assuming you have decent degrees of freedom and a two tailed test of significance. Less than 2 might be statistically significant if you're using a 1 tailed test. More than 2 might be required if you have few degrees of freedom and are using a 2 tailed test.
edited to add: Something else to think about: if the confidence interval includes zero then the effect will not be statistically significant. The confidence interval (at the 95% level) is approximately 2 standard errors. Confidence intervals and significance testing rely on essentially the same logic and it all comes back to standard deviations. | Understanding standard errors on a regression table | The SE is a measure of precision of the estimate. It also can indicate model fit problems. For example, if it is abnormally large relative to the coefficient then that is a red flag for (multi)colli | Understanding standard errors on a regression table
The SE is a measure of precision of the estimate. It also can indicate model fit problems. For example, if it is abnormally large relative to the coefficient then that is a red flag for (multi)collinearity. The model is essentially unable to precisely estimate the parameter because of collinearity with one or more of the other predictors.
The SE is essentially the standard deviation of the sampling distribution for that particular statistic. This is why a coefficient that is more than about twice as large as the SE will be statistically significant at p < .05. You might go back and look at the standard deviation table for the standard normal distribution (Wikipedia has a nice visual of the distribution).
Think of it this way, if you assume that the null hypothesis is true - that is, assume that the actual coefficient in the population is zero, how unlikely would your sample have to be in order to get the coefficient you got? If your sample statistic (the coefficient) is 2 standard errors (again, think "standard deviations") away from zero then it is one of only 5% (i.e. p=.05) of samples that are possible assuming that the true value (the population parameter) is zero. That's a rather improbable sample, right? So we conclude instead that our sample isn't that improbable, it must be that the null hypothesis is false and the population parameter is some non-zero value. We "reject the null hypothesis." Hence, the statistic is "significant" when it is 2 or more standard deviations away from zero which basically means that the null hypothesis is probably false because that would entail us randomly picking a rather unrepresentative and unlikely sample.
I am playing a little fast and loose with the numbers. There is, of course, a correction for the degrees of freedom and a distinction between 1 or 2 tailed tests of significance. With a good number of degrees of freedom (around 70 if I recall) the coefficient will be significant on a two tailed test if it is (at least) twice as large as the standard error. With a 1 tailed test where all 5% of the sampling distribution is lumped in that one tail, those same 70 degrees of freedom will require that the coefficient be only (at least) ~1.7 times larger than the standard error.
So a coefficient (at least) twice as large as its standard error is a good rule of thumb assuming you have decent degrees of freedom and a two tailed test of significance. Less than 2 might be statistically significant if you're using a 1 tailed test. More than 2 might be required if you have few degrees of freedom and are using a 2 tailed test.
edited to add: Something else to think about: if the confidence interval includes zero then the effect will not be statistically significant. The confidence interval (at the 95% level) is approximately 2 standard errors. Confidence intervals and significance testing rely on essentially the same logic and it all comes back to standard deviations. | Understanding standard errors on a regression table
The SE is a measure of precision of the estimate. It also can indicate model fit problems. For example, if it is abnormally large relative to the coefficient then that is a red flag for (multi)colli |
37,998 | Understanding standard errors on a regression table | If you can divide the coefficient by its standard error in your head, you can use these rough rules of thumb assuming the sample size is "large" and you don't have "too many" regressors. When this is not the case, you should really be using the $t$ distribution, but most people don't have it readily available in their brain.
These rules are derived from the standard normal approximation for a two-sided test ($H_0: \beta=0$ vs. $H_a: \beta\ne0$):
1.28 will give you SS at $20\%$.
1.64 will give you SS at $10\%$
1.96 will give you SS at $5\%$
2.56 will give you SS at $1\%$
SS is shorthand for "statistically significant from zero in a two-sided test".
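The listed cutoffs can be verified against the standard normal in a couple of lines (a Python sketch; `statistics.NormalDist` is the standard-library normal distribution):

```python
# A statistic 1.28 / 1.64 / 1.96 / 2.56 standard errors from zero gives a
# two-sided p-value of roughly 20% / 10% / 5% / 1%.
from statistics import NormalDist

z = NormalDist()
ps = {r: 2 * z.cdf(-r) for r in (1.28, 1.64, 1.96, 2.56)}
for r, p in ps.items():
    print(r, round(p, 3))
```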
Often, you will see the 1.96 rounded up to 2. | Understanding standard errors on a regression table | If you can divide the coefficient by its standard error in your head, you can use these rough rules of thumb assuming the sample size is "large" and you don't have "too many" regressors. When this is | Understanding standard errors on a regression table
If you can divide the coefficient by its standard error in your head, you can use these rough rules of thumb assuming the sample size is "large" and you don't have "too many" regressors. When this is not the case, you should really be using the $t$ distribution, but most people don't have it readily available in their brain.
These rules are derived from the standard normal approximation for a two-sided test ($H_0: \beta=0$ vs. $H_a: \beta\ne0$):
1.28 will give you SS at $20\%$.
1.64 will give you SS at $10\%$
1.96 will give you SS at $5\%$
2.56 will give you SS at $1\%$
SS is shorthand for "statistically significant from zero in a two-sided test".
Often, you will see the 1.96 rounded up to 2. | Understanding standard errors on a regression table
If you can divide the coefficient by its standard error in your head, you can use these rough rules of thumb assuming the sample size is "large" and you don't have "too many" regressors. When this is |
37,999 | Understanding standard errors on a regression table | Picking up on Underminer, regression coefficients are estimates of a population parameter. Due to sampling error (and other things if you have accounted for them), the SE shows you how much uncertainty there is around your estimate. If you calculate a 95% confidence interval using the standard error, that will give you the confidence that 95 out of 100 similar estimates will capture the true population parameter in their intervals. Just another way of saying the p value is the probability that the coefficient is due to random error.
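The "95 out of 100 intervals" reading simulates nicely. Here is a toy Python sketch (hypothetical numbers; it uses the normal-based interval for a mean, so the realised coverage sits a touch under 95% at this sample size):

```python
# Build a 95% interval from each of many samples and count how often the
# interval captures the true parameter.
import random

random.seed(3)
mu, sigma, n, trials = 10.0, 2.0, 40, 2000
hits = 0
for _ in range(trials):
    sample = [random.gauss(mu, sigma) for _ in range(n)]
    m = sum(sample) / n
    s = (sum((v - m) ** 2 for v in sample) / (n - 1)) ** 0.5
    half = 1.96 * s / n ** 0.5      # normal approximation to the t interval
    if m - half <= mu <= m + half:
        hits += 1
print(hits / trials)                # close to 0.95
```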
Also, SEs are useful for doing other hypothesis tests - not just testing that a coefficient is 0, but for comparing coefficients across variables or sub-populations. | Understanding standard errors on a regression table | Picking up on Underminer, regression coefficients are estimates of a population parameter. Due to sampling error (and other things if you have accounted for them), the SE shows you how much uncertaint | Understanding standard errors on a regression table
Picking up on Underminer, regression coefficients are estimates of a population parameter. Due to sampling error (and other things if you have accounted for them), the SE shows you how much uncertainty there is around your estimate. If you calculate a 95% confidence interval using the standard error, that will give you the confidence that 95 out of 100 similar estimates will capture the true population parameter in their intervals. Just another way of saying the p value is the probability that the coefficient is due to random error.
Also, SEs are useful for doing other hypothesis tests - not just testing that a coefficient is 0, but for comparing coefficients across variables or sub-populations. | Understanding standard errors on a regression table
Picking up on Underminer, regression coefficients are estimates of a population parameter. Due to sampling error (and other things if you have accounted for them), the SE shows you how much uncertaint |
38,000 | Are there useful applications of SVD that use only the smallest singular values? | It acts like a highpass filter in a slightly different space.
There is lots of linear data, and in many cases you are looking for that linear relationship, so a low-pass (high-blocking) filter lets you retain the important part.
For non-linear data, usually stuff that you have applied the simple methods without success to, the high-pass means that you throw away the unimportant (linear) part.
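A hedged two-dimensional illustration of that high-pass idea (synthetic data, plain Python): for points scattered tightly around a line, the singular vector belonging to the smallest singular value of the centred data is nearly perpendicular to that line, so projecting onto it discards the dominant linear trend and keeps only the residual detail.

```python
# 2-D toy example: the smallest-singular-value direction of centred data is
# the smallest-eigenvalue eigenvector of the 2x2 scatter matrix, computed in
# closed form here to avoid any linear-algebra library.
import math, random

random.seed(4)
pts = [(t, 0.8 * t + random.gauss(0, 0.05)) for t in (i / 50 for i in range(100))]
mx = sum(p[0] for p in pts) / len(pts)
my = sum(p[1] for p in pts) / len(pts)
xs = [p[0] - mx for p in pts]
ys = [p[1] - my for p in pts]

# scatter matrix [[a, b], [b, c]] and its smallest-eigenvalue direction
a = sum(v * v for v in xs)
b = sum(u * v for u, v in zip(xs, ys))
c = sum(v * v for v in ys)
lam_min = (a + c) / 2 - math.sqrt(((a - c) / 2) ** 2 + b * b)
vx, vy = b, lam_min - a                    # eigenvector for lam_min
norm = math.hypot(vx, vy)
vx, vy = vx / norm, vy / norm

line_dir = math.atan2(0.8, 1.0)            # direction of the generating line
perp_angle = abs(math.degrees(math.atan2(vy, vx) - line_dir)) % 180
print(round(perp_angle, 1))                # ~90: perpendicular to the trend
```

Keeping only this direction is the "high-pass" output: the linear part is gone and what remains is the noise/detail component.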
This makes me wonder about computational photography and edging. Thanks. | Are there useful applications of SVD that use only the smallest singular values? | It acts like a highpass filter in a slightly different space.
There is lots of linear data, and in many cases you are looking for that linear relationship, so a low-pass (high-blocking) filter lets | Are there useful applications of SVD that use only the smallest singular values?
It acts like a highpass filter in a slightly different space.
There is lots of linear data, and in many cases you are looking for that linear relationship, so a low-pass (high-blocking) filter lets you retain the important part.
For non-linear data, usually stuff that you have applied the simple methods without success to, the high-pass means that you throw away the unimportant (linear) part.
This makes me wonder about computational photography and edging. Thanks. | Are there useful applications of SVD that use only the smallest singular values?
It acts like a highpass filter in a slightly different space.
There is lots of linear data, and in many cases you are looking for that linear relationship, so a low-pass (high-blocking) filter lets |