28,801
Is a logistic regression biased when the outcome variable is split 5% - 95%?
In theory, you will be able to discriminate better if the proportions of "good" and "bad" are roughly similar in size. You might be able to move towards this by stratified sampling, oversampling bad cases and then reweighting to return to the true proportions later. This carries some risks. In particular, your model is likely to be labelling individuals as "potentially bad" - presumably those who may not pay their utility bills when due. It is important that the impact of errors in doing this is properly recognised: in particular, how many "good" customers will be labelled "potentially bad" by the model. You are also less likely to get the reweighting wrong if you have not distorted your model by stratified sampling in the first place.
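The reweighting step described above can be sketched numerically. Under the standard case-control result for logistic regression, oversampling shifts only the intercept, so a fitted probability can be mapped back to the true base rate by rescaling the odds. The function name and the rates below are illustrative:

```python
def recalibrate(p_model, sample_rate, true_rate):
    """Map a probability from a model fit on an oversampled training set
    back to the true prevalence by rescaling the odds."""
    odds = p_model / (1.0 - p_model)
    # undo the artificial inflation of the base-rate odds
    odds *= (true_rate / (1.0 - true_rate)) * ((1.0 - sample_rate) / sample_rate)
    return odds / (1.0 + odds)

# A score of 0.5 on a 50/50 oversampled set maps back to the 5% base rate.
p = recalibrate(0.5, sample_rate=0.5, true_rate=0.05)
print(round(p, 4))  # 0.05
```

Equivalently, one can subtract ln[(sample_rate/(1-sample_rate)) * ((1-true_rate)/true_rate)] from the fitted intercept once, rather than rescaling each score.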
28,802
Is a logistic regression biased when the outcome variable is split 5% - 95%?
There are many ways in which you can think of logistic regression. My favorite way is to think that your response variable, $y_i$, follows a Bernoulli distribution with probability $p_i$. And $p_i$, in turn, is a function of some predictors. More formally: $$y_i \sim \text{Bernoulli}(p_i)$$ $$p_i = \text{logit}^{-1}(a + b_1x_1 + ... + b_nx_n)$$ where $\text{logit}^{-1}(x) = \frac{\exp(x)}{1+\exp(x)}$. Now, does it matter if you have a low proportion of failures (bad accounts)? Not really, as long as your sample data are balanced, as some people have already pointed out. However, if your data are not balanced, then getting more data may be almost useless if there are selection effects you are not taking into account. In this case, you should use matching, but the lack of balance may make matching pretty useless. Another strategy is to find a natural experiment, so you can use an instrumental variable or a regression discontinuity design. Last but not least, if you have a balanced sample or there is no selection bias, you may be worried by the fact that bad accounts are rare. I don't think 5% is rare, but just in case, take a look at the paper by Gary King about running a rare-events logistic regression. In the Zelig package in R, you can run a rare-events logistic regression.
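The Bernoulli/inverse-logit formulation above is easy to simulate. A minimal sketch, with made-up coefficients chosen so the event is rare, as in the question:

```python
import math
import random

def inv_logit(z):
    # logit^{-1}(z) = exp(z) / (1 + exp(z))
    return math.exp(z) / (1.0 + math.exp(z))

random.seed(0)
a, b1 = -3.0, 0.8                                   # hypothetical coefficients
x = [random.gauss(0.0, 1.0) for _ in range(100_000)]
p = [inv_logit(a + b1 * xi) for xi in x]
y = [1 if random.random() < pi else 0 for pi in p]  # y_i ~ Bernoulli(p_i)
event_rate = sum(y) / len(y)
print(event_rate)  # a low event rate, well under 0.5
```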
28,803
Is a logistic regression biased when the outcome variable is split 5% - 95%?
Okay, so I work in fraud detection, so this sort of problem is not new to me. The machine learning community has quite a bit to say about unbalanced data (as in, the classes are unbalanced). There are a couple of dead-easy strategies that I think have already been mentioned, a couple of neat ideas, and some that are way out there. I'm not even going to pretend to know what this means for the asymptotics of your problem, but it always seems to give me reasonable results in logistic regression. There may be a paper in there somewhere; not sure, though. Here are your options as I see them: Oversample the minority class. This amounts to sampling the minority class with replacement until you have the same number of observations as the majority class. There are fancy ways to do this, such as jittering the observation values so that you have values close to the originals that aren't perfect copies. Undersample: take a subsample of the majority class. Again, there are fancy ways to do this, such as removing the majority samples closest to the minority samples using nearest-neighbour algorithms. Reweight the classes. For logistic regression this is what I do. Essentially, you change the loss function to penalize a misclassified minority case much more heavily than a misclassified majority case. But then, technically, you are no longer doing maximum likelihood. Simulate data. There are lots of neat ideas I've played with here: you can generate data with SMOTE, generative adversarial networks, autoencoders (using the generative portion), or kernel density estimators to draw new samples. At any rate, I've used all of these methods, but I find the simplest is to just reweight the problem for logistic regression anyway. One thing you can do to gut-check your model is to compute -intercept/beta. That should be the decision boundary (50% probability of being in either class) on a given variable, ceteris paribus. If it doesn't make sense, e.g. the decision boundary is a negative number on a variable that is strictly positive, then you've got bias in your logistic regression that needs to be corrected.
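The -intercept/beta gut check at the end can be made concrete. With one predictor, the fitted log-odds cross zero (probability 0.5) exactly at x = -b0/b1. The coefficients below are made up:

```python
import math

def inv_logit(z):
    return 1.0 / (1.0 + math.exp(-z))

def decision_boundary(intercept, beta):
    # the x at which the fitted probability is 0.5, other predictors held at 0
    return -intercept / beta

b0, b1 = -2.0, 4.0                     # hypothetical fitted coefficients
x_star = decision_boundary(b0, b1)
print(x_star)                          # 0.5
print(inv_logit(b0 + b1 * x_star))     # 0.5 by construction
```

If x_star landed at, say, -3 for a strictly positive predictor, that would be the red flag the answer describes.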
28,804
Singular value decomposition of a three-dimensional array
There are several notions of decomposition of such a tensor. Last year I asked essentially the same question on the MaplePrimes site, answered it myself by referring to wikipedia, and provided an implementation for one of those notions (the CANDECOMP/PARAFAC decomposition) in a follow-up post (applied to decomposing the $3\times m \times n$ tensor given by the R,G,B entries of an image).
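For a rank-1 CANDECOMP/PARAFAC fit, alternating least squares reduces to a closed-form update for each factor in turn. A pure-Python sketch, where nested lists stand in for a proper tensor library and the iteration count and example tensor are arbitrary:

```python
def rank1_cp(T, iters=10):
    """Rank-1 CP/PARAFAC of a 3-way tensor T (nested lists) by alternating
    least squares: fit T[i][j][k] ~ a[i] * b[j] * c[k]."""
    I, J, K = len(T), len(T[0]), len(T[0][0])
    b = [1.0] * J
    c = [1.0] * K
    for _ in range(iters):
        nb = sum(v * v for v in b)
        nc = sum(v * v for v in c)
        # least-squares update for each factor with the other two fixed
        a = [sum(T[i][j][k] * b[j] * c[k] for j in range(J) for k in range(K)) / (nb * nc)
             for i in range(I)]
        na = sum(v * v for v in a)
        b = [sum(T[i][j][k] * a[i] * c[k] for i in range(I) for k in range(K)) / (na * nc)
             for j in range(J)]
        nb = sum(v * v for v in b)
        c = [sum(T[i][j][k] * a[i] * b[j] for i in range(I) for j in range(J)) / (na * nb)
             for k in range(K)]
    return a, b, c

# An exactly rank-1 tensor is recovered (up to rescaling of the factors).
a0, b0, c0 = [1.0, 2.0], [1.0, 3.0], [2.0, 1.0]
T = [[[ai * bj * ck for ck in c0] for bj in b0] for ai in a0]
a, b, c = rank1_cp(T)
err = max(abs(T[i][j][k] - a[i] * b[j] * c[k])
          for i in range(2) for j in range(2) for k in range(2))
print(err < 1e-9)  # True
```

Real use cases, such as the R,G,B image decomposition mentioned above, need higher ranks and a dedicated library; this only shows the mechanics of one ALS update per factor.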
28,805
Statistical significance of changes over time on a 5-point Likert item
1. Coding scheme In terms of assessing statistical significance using a t-test, it is the relative distances between the scale points that matter. Thus, (0, 0.25, 0.5, 0.75, 1) is equivalent to (1, 2, 3, 4, 5). In my experience, equal-distance coding schemes such as those mentioned previously are the most common, and they seem reasonable for Likert items. If you explore optimal scaling, you might be able to derive an alternative coding scheme. 2. Statistical test The question of how to assess group differences on a Likert item has already been answered here. The first issue is whether you can link observations across the two time points. It sounds like you had a different sample. This leads to a few options: Independent groups t-test: this is a simple option; it does test for differences in group means; purists will argue that the p-value may not be entirely accurate; however, depending on your purposes, it may be adequate. Bootstrapped test of differences in group means: if you still want to test differences between group means but are uncomfortable with the discrete nature of the dependent variable, you could use a bootstrap to generate confidence intervals from which to draw inferences about changes in group means. Mann-Whitney U test (among other non-parametric tests): such a test does not assume normality, but it is also testing a different hypothesis.
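The bootstrapped option can be sketched as follows, resampling each time point's group independently since the observations cannot be linked. The Likert responses below are made up for illustration:

```python
import random
import statistics

def bootstrap_mean_diff_ci(g1, g2, n_boot=5000, alpha=0.05, seed=1):
    """Percentile bootstrap CI for mean(g1) - mean(g2), resampling each
    group independently (appropriate for two unlinked samples)."""
    rng = random.Random(seed)
    diffs = []
    for _ in range(n_boot):
        r1 = [rng.choice(g1) for _ in g1]
        r2 = [rng.choice(g2) for _ in g2]
        diffs.append(statistics.mean(r1) - statistics.mean(r2))
    diffs.sort()
    lo = diffs[int(n_boot * alpha / 2)]
    hi = diffs[int(n_boot * (1 - alpha / 2)) - 1]
    return lo, hi

# Hypothetical 5-point Likert responses at two time points
time1 = [1, 2, 2, 3, 3, 3, 4, 4, 5, 2, 3, 4]
time2 = [2, 3, 3, 4, 4, 4, 5, 5, 5, 3, 4, 4]
ci_lo, ci_hi = bootstrap_mean_diff_ci(time2, time1)
print(ci_lo, ci_hi)  # a CI for the change in means; check whether it excludes 0
```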
28,806
Statistical significance of changes over time on a 5-point Likert item
The Wilcoxon rank-sum test (aka Mann-Whitney) is the way to go in the case of ordinal data. The bootstrapping solution is also elegant, albeit not the "classic" way to go. The bootstrapping method might also be valuable in case you aim for other things like factor analysis. In the case of regression analysis you might choose ordered probit or ordered logit as a model specification. BTW: if your scale has a larger range (>10 values per variable) you might use the results as a metric variable, which makes a t-test a safe choice. Be advised that this is a little dirty and may be considered devil's work by some. stephan
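The Mann-Whitney U statistic itself is simple to compute by pairwise comparison; a naive O(n·m) sketch (statistical packages additionally provide the p-value with tie corrections):

```python
def mann_whitney_u(x, y):
    """U statistic for sample x: the number of (x_i, y_j) pairs with
    x_i > y_j, counting ties as 1/2. Suitable for ordinal data."""
    u = 0.0
    for xi in x:
        for yj in y:
            if xi > yj:
                u += 1.0
            elif xi == yj:
                u += 0.5
    return u

print(mann_whitney_u([1, 2, 3], [4, 5, 6]))  # 0.0: every x below every y
print(mann_whitney_u([4, 5, 6], [1, 2, 3]))  # 9.0: every x above every y
```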
28,807
How to make representative sample set from a large overall dataset?
If you don't wish to parse the entire data set then you probably can't use stratified sampling, so I'd suggest taking a large simple random sample. By taking a random sample, you ensure that the sample will, on average, be representative of the entire dataset, and standard statistical measures of precision such as standard errors and confidence intervals will tell you how far off the population values your sample estimates are likely to be, so there's no real need to validate that a sample is representative of the population unless you have concerns that it was not truly sampled at random. How large a simple random sample? Well, the larger the sample, the more precise your estimates will be. As you already have the data, conventional sample size calculations aren't really applicable -- you may as well use as much of your dataset as is practical for computing. Unless you're planning to do some complex analyses that will make computation time an issue, a simple approach would be to make the simple random sample as large as can be analysed on your PC without leading to paging or other memory issues. One rule of thumb is to limit the size of your dataset to no more than half your computer's RAM, so as to have space to manipulate it and leave space for the OS and maybe a couple of other smaller applications (such as an editor and a web browser). Another limitation is that 32-bit Windows operating systems won't allow the address space for any single application to be larger than $2^{31}$ bytes = 2.1GB, so if you're using 32-bit Windows, 1GB may be a reasonable limit on the size of a dataset. It's then a matter of some simple arithmetic to calculate how many observations you can sample given how many variables you have for each observation and how many bytes each variable takes up.
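The simple arithmetic at the end can be sketched like this; the 8 GB machine, 20 variables, and population size are illustrative:

```python
import random

def max_sample_rows(ram_bytes, n_vars, bytes_per_var, ram_fraction=0.5):
    """Rough row budget: keep the sample within a fraction of RAM."""
    return int(ram_bytes * ram_fraction) // (n_vars * bytes_per_var)

# e.g. 8 GB of RAM, 20 numeric variables at 8 bytes each
n_max = max_sample_rows(8 * 2**30, n_vars=20, bytes_per_var=8)
print(n_max)  # 26843545 rows, about 26.8 million

# then draw the simple random sample by row index (100k rows here for speed)
random.seed(0)
idx = random.sample(range(200_000_000), k=100_000)
print(len(set(idx)))  # 100000: sampled indices are distinct by construction
```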
28,808
How to make representative sample set from a large overall dataset?
On your second question first, you might ask, "how was the data entered?" If you think that the data was entered in a relatively arbitrary fashion (i.e., independent of any observable or unobservable characteristics of your observations that might influence your ultimate analysis using the data), then you might consider the first 5 million, say, or however many you're comfortable working with, as representative of the full sample and select randomly from this group to create a sample that you can work with. To compare two empirical distributions, you can use qq-plots and the two-sample Kolmogorov–Smirnov non-parametric test for differences in distributions (see, e.g., here: http://en.wikipedia.org/wiki/Kolmogorov%E2%80%93Smirnov_test). In this case, you would test the distribution of each variable in your sample against the distribution of that variable in your "full" data set (again, it could be just 5 million observations from your full sample). The KS test can suffer from low power (i.e., it's hard to reject the null hypothesis of no difference between the groups), but, with that many observations, you should be okay.
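The two-sample KS statistic mentioned above is just the largest vertical gap between the two empirical CDFs. A small sketch (the p-value, which statistical packages provide, is omitted):

```python
import bisect

def ks_statistic(x, y):
    """Two-sample Kolmogorov-Smirnov statistic: max |F_x(v) - F_y(v)|
    over the pooled sample points."""
    xs, ys = sorted(x), sorted(y)
    d = 0.0
    for v in xs + ys:
        fx = bisect.bisect_right(xs, v) / len(xs)
        fy = bisect.bisect_right(ys, v) / len(ys)
        d = max(d, abs(fx - fy))
    return d

print(ks_statistic([1, 2, 3], [1, 2, 3]))     # 0.0 for identical samples
print(ks_statistic([1, 2, 3], [10, 11, 12]))  # 1.0 for disjoint samples
```

In the setting described, you would compute this for each variable, comparing the sample against the full data.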
28,809
Distribution of sample correlation
To quote the Wikipedia article on the Fisher transformation : If $(X, Y)$ has a bivariate normal distribution, and if the $(X_i, Y_i)$ pairs used to form the sample correlation coefficient $r$ are independent for $i = 1, \ldots, N,$ then $$z = {1 \over 2}\ln{1+r \over 1-r} = \operatorname{arctanh}(r)$$ is approximately normally distributed with mean ${1 \over 2}\ln{{1+\rho} \over {1-\rho}},$ and standard error ${1 \over \sqrt{N-3}},$ where $N$ is the sample size.
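In code, this gives an approximate confidence interval by working on the z scale and back-transforming with tanh. The r = 0.6 and N = 50 values below are illustrative:

```python
import math

def fisher_ci(r, n, z_crit=1.96):
    """Approximate 95% CI for a correlation via the Fisher transformation."""
    z = math.atanh(r)                 # 0.5 * ln((1+r)/(1-r))
    se = 1.0 / math.sqrt(n - 3)
    return math.tanh(z - z_crit * se), math.tanh(z + z_crit * se)

lo, hi = fisher_ci(0.6, n=50)
print(lo, hi)  # an interval around r = 0.6, roughly (0.39, 0.75)
```

Note the interval is asymmetric around r on the original scale, as expected for a bounded statistic.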
28,810
How could a Tukey HSD test be more signif then the uncorrected P value of t.test?
Because your pairwise $t$-test above is not adjusted for age, and age explains a lot of the variance in StressReduction.
28,811
Is there a relationship between the median of a function of random variables and the function of the median of random variables?
Let the cdf of $X$ be denoted by $F_X(x)$. Thus, the median of $X$, denoted by $m_x$, satisfies: $F_X(m_x)=0.5$ Consider $Y = X^2$. Thus, the cdf of $Y$ is given by: $P(Y \le y) = P(X^2 \le y)$ In other words, the cdf of $Y$ is given by: $F_Y(y) = F_X(\sqrt{y}) - F_X(-\sqrt{y})$ The median of $Y$, denoted by $m_y$, satisfies: $F_Y(m_y)=0.5$ In other words, it should satisfy: $F_X(\sqrt{m_y}) - F_X(-\sqrt{m_y}) = 0.5$ If $m_y = (m_x)^2$ then it must be that: $F_X(m_x) - F_X(-m_x) = 0.5$ The above with the first equation suggests that the relationship $m(x^2) = m(x)^2$ will only hold if $F_X(-m_x) = 0$. Thus, the relationship holds only if the support of $X$ is non-negative. The examples you examined in your code have a positive support and hence you find that $m(x^2) = m(x)^2$. If you try a uniform distribution (e.g., U(-1,1)) you will find that $m(x^2) \ne m(x)^2$.
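A quick simulation confirms both cases above: for U(-1,1), $m(x^2) = 0.25$ (since $P(X^2 \le m) = \sqrt{m}$) while $m(x)^2 \approx 0$, whereas for a positive-support Exponential(1) the two agree:

```python
import random
import statistics

random.seed(42)
n = 200_000

# U(-1, 1): support includes negatives, so m(x^2) != m(x)^2
u = [random.uniform(-1.0, 1.0) for _ in range(n)]
m_u = statistics.median(u)
m_u2 = statistics.median([v * v for v in u])
print(round(m_u ** 2, 4), round(m_u2, 4))  # ~0.0 vs ~0.25

# Exponential(1): positive support, so m(x^2) == m(x)^2 (~ (ln 2)^2 = 0.48)
e = [random.expovariate(1.0) for _ in range(n)]
m_e = statistics.median(e)
m_e2 = statistics.median([v * v for v in e])
print(round(m_e ** 2, 3), round(m_e2, 3))  # both ~0.48
```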
28,812
Is there a relationship between the median of a function of random variables and the function of the median of random variables?
It seems to me that if $f$ is strictly monotonic, $m \circ f=f \circ m$, and the question reduces to $\mu\circ f>f\circ\mu$, which is covered by Jensen's inequality. So strict convexity and strict monotonicity together would be a sufficient condition.
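Both halves of this claim, that the median commutes with a strictly monotonic $f$ while the mean obeys Jensen's inequality for convex $f$, are easy to check numerically, here with $f = \exp$ on a small made-up sample:

```python
import math
import statistics

data = [0.5, 1.0, 2.0, 4.0, 8.0]   # odd-length sample, so the median is a data point

f = math.exp                        # strictly increasing and strictly convex
m_fx = statistics.median([f(v) for v in data])   # m(f(X))
f_mx = f(statistics.median(data))                # f(m(X))
jensen = statistics.mean([f(v) for v in data]) > f(statistics.mean(data))

print(m_fx == f_mx)  # True: the median commutes with strictly monotonic f
print(jensen)        # True: mean(f(X)) > f(mean(X)) for strictly convex f
```

For an odd-length sample the equality is exact, since the median is an order statistic and monotone maps preserve order.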
28,813
What is wrong with treating everything as a hyperparameter?
Treating "everything" as a hyperparameter leads to an infinite regress of priors In principle, you can take any constant in a distribution that has an allowable range, and you can then treat it like a conditioning random variable. Consequently, in principle you can always have more hyperparameters in your analysis if you want to. But you have to stop somewhere. Treating a formerly fixed quantity in a prior distribution as a hyperparameter means that you are changing your prior distribution. To see this, suppose you have a prior for $\theta$ using some constant $\phi$. If you treat $\phi$ as a hyperparameter with density $f$ then you get the following change in your (marginal) prior for your parameter: $$\begin{matrix} & & & \text{Prior} \\[6pt] \text{Known constant } \phi & & & \pi(\theta|\phi) \\[6pt] \text{Hyperparameter } \phi & & & \pi(\theta) = \int \pi(\theta|\phi) f(\phi) d \phi \\[6pt] \end{matrix}$$ Every time we take a fixed quantity in the prior and treat it as a hyperparameter, we change the (marginal) prior. Usually this change makes the prior become more diffuse, because of the additional uncertainty in relation to a quantity it depends on. If we were to try to "treat everything as a hyperparameter" that would just mean that we would construct an infinite regress of prior distributions, as we take more and more quantities and assign them a hyperprior, thereby changing the (marginal) prior. You would never get to a point where you have exhausted all quantities that could be generalised to hyperparameters, so you would never get to an endpoint giving you a prior distribution to use in your analysis.
28,814
What is wrong with treating everything as a hyperparameter?
I'd think about this from a practical perspective, for example a simple supervised classification task. For this, one would normally choose a model to start with based on some heuristic about data size, shape, and quality. Said model will be parameterized, with our aim being to learn a good set of parameters to predict the class of novel examples drawn from the same distribution as the training data. As you say, it would be perfectly possible to learn the entire set of parameters for said model using some kind of hyperparameter optimization framework. But this would be an incredibly inefficient way of training your classifier, as it treats the entire function as a black box. The classifier you've chosen will probably come with its own optimization function that aims to produce the lowest possible error on a training set, usually using some kind of feedback mechanism to update the parameters based on the quality of the predictions it is producing. Your choice of model was a prior you imposed, but that model probably has parameters that define it yet can't be learned by the standard training algorithm for that model. Example: the number of trees in a random forest. So we need some mechanism to choose these 'hyperparameters', which are hopefully few in number. Of course the space of hyperparameters is essentially infinite, so we come to some reasonable balance based on a compute/time budget, and evaluate various settings of the parameters (probably using cross validation) to find a good model for the task at hand.
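To make the split concrete, here is a minimal toy sketch (my own example, not part of the answer): the model's coefficients are learned by its own efficient optimizer (least squares), while the single hyperparameter (the polynomial degree) has to be chosen by an outer, black-box search on held-out data:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy data: quadratic signal plus noise (hypothetical example).
x = rng.uniform(-2, 2, size=200)
y = 1.0 + 2.0 * x + 3.0 * x**2 + rng.normal(0, 0.5, size=200)

# Train / validation split.
x_tr, y_tr = x[:150], y[:150]
x_val, y_val = x[150:], y[150:]

def fit_and_score(degree):
    """Parameters (coefficients) are learned by the model's own
    optimizer (least squares); only `degree` is a hyperparameter."""
    coefs = np.polyfit(x_tr, y_tr, degree)
    pred = np.polyval(coefs, x_val)
    return np.mean((pred - y_val) ** 2)

# Outer search over the hyperparameter only -- cheap, because each
# candidate is fitted by an efficient inner optimizer, not a black box.
val_mse = {d: fit_and_score(d) for d in range(1, 6)}
best_degree = min(val_mse, key=val_mse.get)
print(best_degree, val_mse[best_degree])
```

The outer loop only has to visit a handful of degrees; treating the coefficients themselves as "hyperparameters" in that loop would explode the search space for no benefit.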
28,815
How to avoid overfitting bias when both hyperparameter tuning and model selecting?
As @DikranMarsupial says, you need a nested validation procedure. In the inner e.g. cross validation, you do all the tuning of your model - that includes both choosing hyperparameters and model family. In principle, you could also have a triply nested validation structure, with the innermost tuning the respective model family hyperparameters, the middle one choosing the model family and the outer as usual to obtain a generalization error estimate for the final model. The disadvantage with this, however, is that splitting more often than necessary means that the data partitions become rather small and thus the whole procedure may become more unstable (small optimization/validation/test sets mean uncertain performance estimates).

Update:

Nesting vs. cross validation or hold-out

Nesting is independent of the question what splitting scheme you employ at each level of the nested set-up. You can do cross validation at each level, a single split at each level or any mixture you deem suitable for your task. 2 nested levels and both CV is what is often referred to as nested cross validation, 2 nested levels and both single split is equivalent to the famous train - validation [optimization] - test [verification] setup. Mixes are less common, but are a perfectly valid design choice as well. If you have sufficient data so that single splits are a sensible option, you may also have sufficient data to do three such splits, i.e. work with 4 subsets of your data. One thing you need to keep in mind, though: with a single split in the optimization steps* you deprive yourself of a very easy and important means of checking whether your optimization is stable, which cross validation (or doing several splits) provides.

* whether combined hyperparameter with model family optimization or model family choice plus "normal" hyperparameter optimization

Triply nested vs. 
"normal" nested This would be convenient in that it is easy to implement in a way that guards against accidental data leaks - and which I suspect is what you were originally after with your question: estimate_generalization_error() which splits the data into test and train and on its train data calls choose_model_family() which employs another internal split to guide the choice and calls and on its training split calls the various optimize_model_*() which implement another internal split to optimize the usual hyperparameters for each model family (*), and on its training split calls the respective low-level model fitting function. Here, choose_model_family() and optimize_model_*() are an alternative to a combined tuning function that does the work of both in one split. Since both are training steps, it is allowed to combine them. If you do grid search for hyperparameter tuning, you can think of this as a sparse grid with model family x all possible hyperparameters where evaluate only combinations that happen to exist (e.g. skip mtry for SVM). Or you look at the search space as a list of plausible hyperparamter combinations that you check out: - logistic regression - SVM with cost = 1, gamma = 10 - SVM with cost = 0.1, gamma = 100 ... - random forest with ... to find the global optimum across model families and model family specific hyperparameters. There is nothing special about model_family - it is a hyperparameter for the final model like cost or gamma are for SVMs. In order to wrap your head around the equivalence, consider optimizing gamma and cost for an SVM. Method one: set up a grid or a list of all plausible cost; gamma combinations and search that for the optimum. This is the analogue to the "normal" nested approach. Method two: set up a list of all plausible cost values. for each cost value, optimize gamma. select the cost with best optimized gamma This is the analogue to the triply nested approach. 
In both cases, we can "flatten" the nested structure into a single loop iterating over a list or grid (I'm sorry, I lack the proper English terms - maybe someone can help?). This is also vaguely similar to "flattening" a recursive structure into an iterative one [though the triply nested is not recursive, since we have different functions f(g(h()))]. This flattening approach potentially has the further advantage that it may be better suited to advanced optimization heuristics. As an example, consider moving from "select the observed optimum" to the one-standard-deviation rule. With the flattened approach, you can now look across model families for the least complex model that is not more than 1 sd worse than the observed optimum.
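The flattened search over a list of (model family, hyperparameter) combinations, wrapped in a nested validation loop, can be sketched as follows. This is a toy illustration of my own (the data, the two hypothetical "families" and the candidate list are assumptions, not from the answer):

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy regression data (hypothetical; stands in for the real problem).
x = rng.uniform(-3, 3, size=120)
y = np.sin(x) + rng.normal(0, 0.2, size=120)

def predict(family, hyper, x_tr, y_tr, x_te):
    if family == "poly":                      # hyper = polynomial degree
        return np.polyval(np.polyfit(x_tr, y_tr, hyper), x_te)
    elif family == "knn":                     # hyper = number of neighbours
        d = np.abs(x_te[:, None] - x_tr[None, :])
        idx = np.argsort(d, axis=1)[:, :hyper]
        return y_tr[idx].mean(axis=1)

def cv_mse(config, x, y, k=4):
    """Plain k-fold CV error for one (family, hyperparameter) combination."""
    family, hyper = config
    folds = np.array_split(np.arange(len(x)), k)
    errs = []
    for f in folds:
        tr = np.setdiff1d(np.arange(len(x)), f)
        pred = predict(family, hyper, x[tr], y[tr], x[f])
        errs.append(np.mean((pred - y[f]) ** 2))
    return np.mean(errs)

# The flattened search space: model family is just another hyperparameter.
configs = [("poly", d) for d in (1, 3, 5)] + [("knn", k) for k in (1, 3, 7)]

# Nested CV: the outer loop estimates the generalization error of the
# whole tuning procedure; the inner CV (cv_mse) does all the tuning.
outer_folds = np.array_split(np.arange(len(x)), 5)
outer_errs = []
for f in outer_folds:
    tr = np.setdiff1d(np.arange(len(x)), f)
    best = min(configs, key=lambda c: cv_mse(c, x[tr], y[tr]))
    pred = predict(*best, x[tr], y[tr], x[f])
    outer_errs.append(np.mean((pred - y[f]) ** 2))

print("estimated generalization MSE:", np.mean(outer_errs))
```

Note that nothing in the outer loop cares whether a candidate is a polynomial or a k-NN model: the family label rides along exactly like cost or gamma would.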
28,816
How to avoid overfitting bias when both hyperparameter tuning and model selecting?
Just to add to @cbeleites answer (which I tend to agree with), there is nothing inherent about nested cross validation that will stop the issue in the OP. Nested cross validation is simply the cross validated analog to a train/test split, with cross validation performed on the training set. All this serves to do is reduce variance in your estimate of the generalization error by averaging splits. That said, obviously reducing variance in your estimate is a good thing, and nested CV should be preferred over a single train/test split if time allows. For the OP as I see it there are two solutions (I will describe them under a single train/test split instead of nested CV, but they could obviously be applied to nested CV as well).

The first solution would be to perform a train/test split and then split the training set into train/test again. You now have a training set and two test sets. For each model family, perform cross validation on the training set to determine hyper-parameters. For each model family, select the best performing hyper-parameters and obtain an estimate of generalization error from test set 1. Then compare the error rates of each model family to select the best, and obtain its generalization error on test set 2. This would eliminate your issue of optimistic bias due to selecting the model using data that was used for training; however, it would add more pessimistic bias, as you have to remove data from training for test set 2.

The other solution, as cbeleites described, is to simply treat model selection as a hyper-parameter. When you are determining the best hyper-parameters, include model family in this selection. That is, you aren't just comparing a random forest with mtry = 1 to a random forest with mtry = 2... you are comparing random forest with mtry = 1 to mtry = 2 and to SVM with cost = 1, etc.

Finally I suppose the other option is to live with the optimistic bias of the method in the OP. 
From what I understand, one of the main reasons leading to the requirement of a test set is that as the hyper-parameter search space grows, so too does the likelihood of selecting an over-fit model. If model selection is done using the test set but only between 3 or 4 model families, I wonder how much optimistic bias this actually causes. In fact, I would not be surprised if this was the largely predominant method used in practice, particularly for those who use pre-built functionality a la scikit-learn or caret. After all, these packages allow a grid search over a single model family, not multiple at the same time.
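The first solution (train set, test set 1 for family selection, untouched test set 2 for the final error estimate) can be sketched as follows. This is a toy stand-in of my own: the two "families" here are hypothetical groups of polynomial models, used only to illustrate the mechanics of the two test sets:

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy data (hypothetical); in practice this is your real dataset.
x = rng.uniform(-3, 3, size=300)
y = np.sin(x) + rng.normal(0, 0.2, size=300)

# One split into train / test-1 / test-2.
x_tr, x_t1, x_t2 = x[:200], x[200:250], x[250:]
y_tr, y_t1, y_t2 = y[:200], y[200:250], y[250:]

def poly_mse(degree, x_fit, y_fit, x_eval, y_eval):
    pred = np.polyval(np.polyfit(x_fit, y_fit, degree), x_eval)
    return np.mean((pred - y_eval) ** 2)

# Step 1: per "family" (here: two hypothetical groups of polynomial
# degrees, standing in for, say, random forest vs SVM), tune on the
# training set and score the tuned model on test set 1.
families = {"low-degree": (1, 2, 3), "high-degree": (5, 6, 7)}
scores_t1 = {}
for name, degrees in families.items():
    best_d = min(degrees, key=lambda d: poly_mse(d, x_tr[:150], y_tr[:150],
                                                 x_tr[150:], y_tr[150:]))
    scores_t1[name] = (best_d, poly_mse(best_d, x_tr, y_tr, x_t1, y_t1))

# Step 2: pick the winning family on test set 1 ...
winner = min(scores_t1, key=lambda k: scores_t1[k][1])
best_d = scores_t1[winner][0]

# ... and report the generalization error on the untouched test set 2.
final_mse = poly_mse(best_d, x_tr, y_tr, x_t2, y_t2)
print(winner, best_d, final_mse)
```

The optimistic bias is removed because test set 2 is never consulted before the final report; the pessimistic bias shows up as the 50 observations withheld from training.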
28,817
Levenshtein Distance vs Damerau Levenstein vs Optimal String Alignment Distance
For two strings, a and b:

- Levenshtein Distance: The minimal number of insertions, deletions, and symbol substitutions required to transform a into b.
- Damerau-Levenshtein Distance: Like the Levenshtein Distance, but you can also use transpositions (swapping of adjacent symbols).
- Optimal String Alignment Distance: Like Damerau-Levenshtein, but you are not allowed to apply multiple transformations to the same substring (e.g. first transpose two symbols, then insert a third between them).

The distances can all be computed using dynamic programming. In my opinion, the Wikipedia page explains it quite well. If you are interested in books, you can try Gusfield's "Algorithms on Strings, Trees, and Sequences: Computer Science and Computational Biology" (and others your online book shop might recommend to you after you search for the above).
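As an illustration, here is a compact dynamic-programming implementation of the plain Levenshtein distance (my own sketch of the standard Wagner-Fischer recurrence, not code from the answer):

```python
def levenshtein(a: str, b: str) -> int:
    """Minimal number of single-character insertions, deletions and
    substitutions needed to turn a into b (Wagner-Fischer DP)."""
    m, n = len(a), len(b)
    # prev[j] holds the distance between a[:i-1] and b[:j].
    prev = list(range(n + 1))
    for i in range(1, m + 1):
        cur = [i] + [0] * n
        for j in range(1, n + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            cur[j] = min(prev[j] + 1,         # deletion
                         cur[j - 1] + 1,      # insertion
                         prev[j - 1] + cost)  # substitution / match
            # The Damerau-Levenshtein variants would add a fourth case
            # here, allowing a transposition of two adjacent symbols.
        prev = cur
    return prev[n]

print(levenshtein("kitten", "sitting"))  # -> 3
```

The two nested loops give the O(len(a) * len(b)) cost typical of all three distances.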
28,818
Levenshtein Distance vs Damerau Levenstein vs Optimal String Alignment Distance
I presume you understand the general purpose behind the algorithms, i.e. compute the 'distance' in respect to how many 'edits' it would take to transform string A so that it equals string B. The algorithms for this (in general) are constrained by the 'types' of edits they can handle when evaluating. The Levenshtein algorithm computes the distance taking three possible ways into account: insertions, deletions or substitutions of single characters. The Wikipedia article covers the details beyond what would make sense to attempt here, https://en.wikipedia.org/wiki/Levenshtein_distance. The Damerau-Levenshtein variant adds a fourth way, the ability to account for transposition of two adjacent characters, as a possible step. Again, Wikipedia coverage is extensive and comparative to the other methods, https://en.wikipedia.org/wiki/Damerau%E2%80%93Levenshtein_distance. The Optimal String Alignment adds a similar fourth way. Though it is similar, it's not the same as the 'true' Damerau-Levenshtein algorithm above. OSA is covered in the D-L article above. Attempting to explain it in short: the algorithms keep a matrix of minimum costs at the character intersections, based on the 3/4 ways of comparing edit costs while iterating through the two strings. Examine the two example matrices in the Levenshtein article above to understand how the matrix forms. Here's an article that explains it: https://people.cs.pitt.edu/~kirk/cs1501/Pruhs/Spring2006/assignments/editdistance/Levenshtein%20Distance.htm. There is also a third article on 'edit distance' in general: https://en.wikipedia.org/wiki/Edit_distance. Other than these articles, you can search the Internet because there is a lot of information out there. The main thing is not to drown in lingo. The algorithms are actually simple (i.e. elegant), which sometimes makes people think too much.
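Here is a sketch of the OSA matrix computation described above (my own illustration, not code from the answer). The only change relative to plain Levenshtein is the extra transposition case, and its restriction to looking back at d[i-2][j-2] is exactly what separates OSA from the 'true' Damerau-Levenshtein algorithm:

```python
def osa(a: str, b: str) -> int:
    """Optimal String Alignment distance: Levenshtein plus transposition
    of adjacent symbols, but no further edits on a transposed substring."""
    m, n = len(a), len(b)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i
    for j in range(n + 1):
        d[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution
            if (i > 1 and j > 1 and a[i - 1] == b[j - 2]
                    and a[i - 2] == b[j - 1]):
                d[i][j] = min(d[i][j], d[i - 2][j - 2] + 1)  # transposition
    return d[m][n]

# "ca" -> "abc": OSA gives 3, while the unrestricted Damerau-Levenshtein
# distance is 2 (transpose to "ac", then insert "b" inside the transposed
# pair -- exactly the kind of edit OSA forbids).
print(osa("ca", "abc"))    # -> 3
print(osa("abcd", "acbd"))  # -> 1
```

The "ca" vs "abc" pair is the standard example showing where the two fourth-way variants disagree.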
28,819
Having a hard time with the law of the iterated logarithm
You are looking at a truly tiny simulation. Here's one that went out to $n=2^{1000} \approx 1.07\times 10^{301}:$ (In order to plot it I thinned the walk to $999$ values equally spaced horizontally and connected them with line segments. The actual simulation is, of course, much more detailed than can be shown here :-).) Clearly this walk repeatedly hits (and slightly exceeds) the $\pm 1$ thresholds. The Law of the Iterated Logarithm says this behavior will continue ad infinitum, with the excursions beyond this threshold growing ever rarer. The range from $n=10^8$ to $n=10^{10},$ which is basically that visible in your plot, is bounded on the left and right by the light blue lines. In the overall context, we can hardly expect the scaled random walk to vary much within such a narrow interval. The point is that this law requires you to plot the scaled, standardized random walk on a logarithmic (or even log-log) axis for the index $n.$
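To reproduce the idea at a (much) smaller scale, one can scale a simulated walk by $\sqrt{2n\log\log n}$ and sample it at logarithmically spaced indices. This is an illustrative sketch of my own, not the code behind the plot in the answer:

```python
import numpy as np

rng = np.random.default_rng(4)

# One +/-1 random walk -- far shorter than a 2^1000-step simulation,
# but long enough to see the scaling at work.
n = 10**6
steps = rng.choice([-1, 1], size=n)
walk = np.cumsum(steps)

# LIL scaling: S_n / sqrt(2 n log log n).  The law says the limsup of
# this quantity is exactly 1 (and the liminf is -1).
idx = np.unique(np.logspace(1, 6, 500).astype(int))  # log-spaced n
denom = np.sqrt(2 * idx * np.log(np.log(idx)))
scaled = walk[idx - 1] / denom

# On a *logarithmic* n-axis these values wander between roughly -1 and 1;
# on a linear axis almost all of them would be crammed into the far right.
print(scaled.min(), scaled.max())
```

Plotting `scaled` against `np.log(idx)` rather than `idx` is the point of the answer: only then does the narrow window $[10^8, 10^{10}]$ reveal itself as a thin slice of the whole range.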
28,820
Trying to Estimate Disease Prevalence from Fragmentary Test Results
Notation. Let $\pi = P(\text{Disease})$ be the prevalence of the disease in the population and $\tau = P(\text{Pos Test})$ be the proportion testing positive. For the test, let $\eta = P(\text{Pos}|\text{Disease})$ be the sensitivity and $\theta = P(\text{Neg}|\text{No Disease})$ be its specificity. Also, given test results, let $\gamma = P(\text{Disease}| \text{Pos})$ and $\delta = P(\text{No Disease}| \text{Neg})$ be, respectively, the predictive powers of a positive or negative test. If a test is of gold standard quality with $\eta = \theta = 1,$ then $\pi = \tau.$ Tests that accurately sequence the genome of the virus may be gold standard tests. Often the first tests for a virus may have considerably lower values of $\eta$ and $\theta.$ It is difficult to find accounts of values of $\eta$ and $\theta$ for any of the tests in current use for COVID-19. (According to one unauthorized report, a test used in China had $\theta \approx 0.7$.)

Traditional estimate. First, we look at results for tests with $\eta = \theta = 0.95.$ Then for data with $n= 11\,500; a = 1206,$ we have $\hat \tau = t = 0.1049.$ Because $\tau = \pi\eta + (1-\pi)(1-\theta),$ we can estimate $\pi$ by $\hat\pi = (t + \theta - 1)/(\eta + \theta - 1).$ The associated 95% Wald confidence interval for $\tau$ is $(0.0993, 0.1105),$ from which one can derive the confidence interval $(0.0547, 0.0672)$ for $\pi.$ Also, $\hat\pi = 0.061$ implies that the predictive power of a positive test is $\gamma = 0.5523.$ Only about half of the subjects testing positive are actually infected. Some computations in R follow:

ETA = THETA = .95
n = 11500; a = 1206; p0m = -1:1; t = a/n
wald.TAU = t + p0m*1.96*sqrt(t*(1-t)/n); wald.TAU
[1] 0.09926973 0.10486957 0.11046940
ci.PI = (wald.TAU + THETA - 1)/(ETA + THETA - 1); ci.PI
[1] 0.05474415 0.06096618 0.06718822
PI = (t + THETA - 1)/(ETA + THETA - 1); PI
[1] 0.06096618
GAMMA = PI*ETA/(PI*ETA + (1-PI)*(1-THETA)); GAMMA
[1] 0.5522849

When the traditional estimate is problematic. 
For a poorer test with $\eta = \theta = 0.90,$ this method gives a CI for $\pi$ as $(-0.0009, 0.0131),$ which has a (nonsensical) negative left endpoint. (We would expect about 1150 false positive tests even with no infected subjects. This is getting close to the observed number 1206 of positive tests.) In such circumstances, one wonders whether to trust the point estimates $\hat \pi = 0.0061$ and $\hat \gamma = 0.0522.$ ETA = THETA = .9 n = 11500; a = 1206; p0m = -1:1; t = a/n wald.TAU = t +p0m*1.96*sqrt(t*(1-t)/n); wald.TAU [1] 0.09926973 0.10486957 0.11046940 ci.PI = (wald.TAU + THETA - 1)/(ETA + THETA - 1); ci.PI [1] -0.0009128343 0.0060869565 0.0130867473 PI = (t + THETA -1)/(ETA + THETA -1); PI [1] 0.006086957 GAMMA = PI*ETA/(PI*ETA + (1-PI)*(1-THETA)); GAMMA [1] 0.05223881 A Gibbs sampler. One useful alternative approach is to assume a beta prior distribution on prevalence $\pi \sim \mathsf{Beta}(\alpha, \beta).$ Even if noninformative with $\alpha = \beta = 0.5,$ such a prior distribution excludes values of $\pi$ outside $(0,1).$ Then we use a Gibbs sampler to find the posterior distribution of $\pi,$ given our data $n = 11\,500, a = 1206.$ Its steps, within each iteration, are as follows: We begin with an arbitrary value of $\pi_1 \in (0,1)$ and use it to estimate 'latent' counts of subjects with the disease based on predictive values $\gamma$ and $\delta.$ We sample counts $X \sim \mathsf{Binom}(a, \gamma)$ and $Y \sim \mathsf{Binom}(n-a, 1 - \delta).$ Then with the estimated $S = X+Y$ infected subjects, we update the beta prior at this step as $\pi|S \sim \mathsf{Beta}(\alpha + S, \beta + n - S).$ Finally, we sample $\pi_2$ from this updated distribution. Even with arbitrary $\pi_1,$ this new value $\pi_2$ is likely to be closer to the truth. Simulated posterior distribution. 
Iterating through many such steps we get successive values $\pi_1, \pi_2, \pi_3, \dots ,$ in a convergent Markov chain, for which the limiting distribution is the desired posterior distribution of the prevalence. To make sure that the chain has reached steady state, we use only the last half of the many values $\pi_i$ generated in this way. Cutting 2.5% of the probability from each tail of the simulated posterior distribution, we can obtain a 95% Bayesian probability estimate for prevalence $\pi.$ set.seed(1020) m = 10^5 # iterations PI = GAMMA = numeric(m) # vectors for results PI[1] = .5 # initial value alpha = .5; beta = .5 # parameters of beta prior ETA = .9; THETA = .9 # sensitivity; specificity n = 11500; A = 1206; B = n - A # data for (i in 2:m) { num.x = PI[i-1]*ETA; den.x = num.x + (1-PI[i-1])*(1 - THETA) GAMMA[i] = num.x/den.x X = rbinom(1, A, num.x/den.x) # use est of gamma as probability num.y = PI[i-1]*(1 - ETA); den.y = num.y + (1-PI[i-1])*THETA Y = rbinom(1, B, num.y/den.y) # use 1 - est of delta as probability PI[i] = rbeta(1, X + Y + alpha, n - X - Y + beta) } aft.brn = seq(floor(m/2),m) quantile(PI[aft.brn], c(.025, .975)) 2.5% 97.5% 3.329477e-05 1.225794e-02 quantile(PI[aft.brn], .95) 95% 0.01101075 mean(PI[aft.brn]) [1] 0.0049096 quantile(GAMMA[aft.brn], c(.025, .975)) 2.5% 97.5% 0.0002995732 0.1004690791 mean(GAMMA[aft.brn]) [1] 0.04176755 Because the two-sided Bayesian probability interval $(.00003, .0123)$ has its lower endpoint so near to 0, we also look at the one-sided interval $(0, .0110)$ for $\pi.$ Because we estimate the predictive power $\gamma$ of a positive test at each step of the chain, we capture its values in order to get a 95% Bayesian probability interval $(0.0003, 0.1005)$ for the predictive power $\gamma$ of a positive test. 
If we were to sequester subjects that get a positive result with this test, then only a relatively small proportion of sequestered subjects would actually be infected. Diagnostic Plots. Not all Gibbs samplers converge as anticipated. Diagnostic plots show that this one does. A plot of successive values of $\pi$ shows the nature of the convergence of the Markov chain. The history plot of $\pi$ shows that the chain 'mixes well'; that is, it moves freely among appropriate values. There are no points of near-absorption. The trace of running averages of the $\pi_i$ shows smooth convergence to prevalence 0.0049. Vertical blue lines indicate the burn-in period. The ACF plot shows that the $\pi_i$ are not independent. Among the $m = 100\,000$ values, perhaps there are 1000 independent ones. In many Gibbs samplers, Markov dependence 'wears away' more rapidly than here. The plot at lower-right is a history plot of the $\gamma_i.$ Variations. If we run the same program with $\eta = \theta = .95,$ results are nearly the same as for the traditional procedure. If we have useful prior information (or opinions) about the prevalence, we can incorporate that information into the prior distribution on $\pi.$ References: (1) Suess, Gardner, & Johnson (2002), "Hierarchical Bayesian model for prevalence inferences and determination of a country’s status for an animal pathogen" Preventive Veterinary Medicine, and its references. (2) Suess & Trumbo (2010) Probability simulation and Gibbs sampling, (Sect. 9.1), Springer.
28,821
Dropout in Linear Regression
$\newcommand{\E}{\text{E}}$First let $R * X = M$ for convenience. Expanding the loss we have $$ \|y - Mw\|^2 = y^Ty - 2w^TM^Ty + w^TM^TMw. $$ Taking the expectation w.r.t. $R$ we have $$ \E_R\left(\|y - Mw\|^2\right) = y^Ty - 2w^T(\E M)^Ty + w^T\E(M^TM)w. $$ The expected value of a matrix is the matrix of cell-wise expected values, so $$ (\E_R M)_{ij} = \E_R((R * X)_{ij}) = X_{ij}\E_R(R_{ij}) = p X_{ij} $$ so $$ 2w^T(\E M)^Ty = 2pw^TX^Ty. $$ For the last term, $$ (M^TM)_{ij} = \sum_{k=1}^N M_{ki}M_{kj} = \sum_{k=1}^N R_{ki}R_{kj}X_{ki}X_{kj} $$ therefore $$ (\E_R M^TM)_{ij} = \sum_{k=1}^N \E_R(R_{ki}R_{kj})X_{ki}X_{kj}. $$ If $i \neq j$ then $R_{ki}$ and $R_{kj}$ are independent, so the off-diagonal elements result in $p^2 (X^TX)_{ij}$. For the diagonal elements, since $R_{ki}^2 = R_{ki}$ for Bernoulli variables, we have $$ \sum_{k=1}^N \E_R(R_{ki}^2)X_{ki}^2 = p(X^TX)_{ii}. $$ Finishing this off, we can note that $$ \|y - pXw\|^2 = y^Ty - 2pw^TX^Ty + p^2w^TX^TXw $$ and we've found $$ \E_R\|y - Mw\|^2 = y^Ty - 2pw^TX^Ty + w^T\E_R(M^TM)w \\ = \|y - pXw\|^2 - p^2w^TX^TXw + w^T\E_R(M^TM)w \\ = \|y - pXw\|^2 + w^T\left(\E_R(M^TM) - p^2 X^TX\right)w. $$ In $\E_R(M^TM) - p^2 X^TX$, I showed that every off-diagonal element is zero, so the result is $$ \E_R(M^TM) - p^2 X^TX = p(1-p)\text{diag}(X^TX). $$ The paper defines $\Gamma = \text{diag}(X^TX)^{1/2}$ so $\|\Gamma w\|^2 = w^T\text{diag}(X^TX)w$ which means we are done.
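The identity can be checked numerically. Below is a quick Monte Carlo sketch in Python/NumPy (the sizes, keep-probability $p$, and the particular $X$, $y$, $w$ are all arbitrary choices made up for the check): average the dropout loss over many random masks $R$ and compare with $\|y - pXw\|^2 + p(1-p)\|\Gamma w\|^2.$

```python
import numpy as np

# Arbitrary problem sizes and keep-probability, chosen only for this check.
rng = np.random.default_rng(0)
N, d, p = 50, 4, 0.8
X = rng.normal(size=(N, d))
y = rng.normal(size=N)
w = rng.normal(size=d)

# Monte Carlo estimate of E_R ||y - (R * X) w||^2 over Bernoulli(p) masks R.
trials = 20_000
R = (rng.random(size=(trials, N, d)) < p).astype(float)
preds = np.einsum('tnd,d->tn', R * X, w)      # (R * X) w for each mask
mc = np.mean(np.sum((y - preds) ** 2, axis=1))

# Closed form: ||y - p X w||^2 + p(1 - p) ||Gamma w||^2,
# with Gamma = diag(X^T X)^{1/2}.
closed = np.sum((y - p * X @ w) ** 2) \
       + p * (1 - p) * np.sum(np.diag(X.T @ X) * w ** 2)

print(mc, closed)  # the two values agree up to Monte Carlo error
```

The two printed numbers should match to within the Monte Carlo noise of the mask average.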
28,822
How to use the transformer for inference
A popular method for such sequence generation tasks is beam search. It keeps the K best sequences generated so far as candidate "output" sequences, extending each of them by one token at every step. In the original paper, different beam sizes were used for different tasks. With a beam size of K=1, it reduces to the greedy method in the blog you mentioned.
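As an illustration, here is a minimal beam-search sketch in Python. The toy `step_scores` model, its vocabulary, and the token ids are all invented for the example; in practice a transformer decoder would supply the next-token log-probabilities at each step.

```python
import math

def beam_search(step_scores, start, eos, K=3, max_len=10):
    """Keep the K highest-scoring partial sequences at each step.

    step_scores(seq) must return {token: log_prob} for the next token.
    With K=1 this reduces to greedy decoding.
    """
    beams = [([start], 0.0)]                 # (sequence, total log-prob)
    finished = []
    for _ in range(max_len):
        candidates = []
        for seq, score in beams:
            for tok, logp in step_scores(seq).items():
                candidates.append((seq + [tok], score + logp))
        # Keep the K best; move completed sequences aside.
        candidates.sort(key=lambda c: c[1], reverse=True)
        beams = []
        for seq, score in candidates:
            (finished if seq[-1] == eos else beams).append((seq, score))
            if len(beams) == K:
                break
        if not beams:
            break
    return max(finished + beams, key=lambda c: c[1])

# A hypothetical stand-in for a decoder: fixed next-token distributions.
def toy_model(seq):
    table = {
        0: {1: math.log(0.6), 2: math.log(0.4)},
        1: {3: math.log(0.9), 9: math.log(0.1)},
        2: {9: math.log(1.0)},
        3: {9: math.log(1.0)},
    }
    return table[seq[-1]]

best, score = beam_search(toy_model, start=0, eos=9, K=2)
print(best)   # highest-probability complete sequence under the toy model
```

Note that greedy decoding can miss the globally best sequence when an early low-probability token leads to a much better continuation; a beam of K > 1 keeps such alternatives alive.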
28,823
Can it be over fitting when validation loss and validation accuracy is both increasing?
Yes, absolutely. First of all, overfitting is best judged by looking at loss, rather than accuracy, for a series of reasons including the fact that accuracy is not a good way to estimate the performance of classification models. See here: https://stats.stackexchange.com/a/312787/58675 Why is accuracy not the best measure for assessing classification models? Classification probability threshold Secondly, even if you use accuracy, rather than loss, to judge overfitting (and you shouldn't), you can't just look at the (smoothed) derivative of accuracy on the test curve, i.e., whether it's increasing on average or not. You should first of all look at the gap between training accuracy and test accuracy. And in your case this gap is very large: you'd better use a scale which starts either at 0, or at the accuracy of the random classifier (i.e., the classifier which assigns each instance to the majority class), but even with your scale, we're talking a training accuracy of nearly 100%, vs. a test accuracy which doesn't even get to 65%. TL;DR: you don't want to hear it, but your model is as overfit as they get. PS: you're focusing on the wrong problem. The issue here is not whether to do early stopping at the 1st epoch for a test accuracy of 55%, or whether to stop at epoch 7 for an accuracy of 65%. The real issue here is that your training accuracy (but again, I would focus on the test loss) is way too high with respect to your test accuracy. 55%, 65% or even 75% are all crap with respect to 99%. This is a textbook case of overfitting. You need to do something about it, not focus on the "less worse" epoch for early stopping.
28,824
Can it be over fitting when validation loss and validation accuracy is both increasing?
There are at least two possible causes of this curve behaviour in a case like this. The reasonable distinctions we can draw by looking at this graphical dataset are as follows: 1- The network shows rising validation loss because the model is overfitting. This is a personal assessment from my own study of the subject, and of the possible explanations here it is the most likely one. 2- Another possible cause is an unknown variant or error in the training dataset, such as a spontaneous reaction. Feel free to comment.
28,825
T-test in the presence of autocorrelation
With model variations like this, it is usually possible to adjust the standard pivotal quantity for the T-test, and perform a test analogous to a standard T-test, by calculating the standard error of your point estimator for the mean, and adjusting your test accordingly. This may require quite a bit of algebra, but it is usually possible to do with a bit of work. In this answer we will derive the variance of the sample mean and use this to find the standard error of the sample mean, expressed in its usual fashion but with an adjustment for the effective sample size. We will then form a quasi-pivotal quantity based on the sample mean that can be used for hypothesis testing. Variance of the sample mean in an AR(1) model: For an AR$(1)$ model with auto-regression parameter $-1<\phi<1$ you have (using some algebra shown in Appendix at bottom of post): $$\begin{equation} \begin{aligned} \mathbb{V}(\bar{X}_n) = \mathbb{V} \Big( \frac{1}{n} \sum_{t=1}^n X_t \Big) &= \frac{1}{n^2} \sum_{t=1}^n \sum_{r=1}^n \mathbb{C}(X_t,X_r) \\[6pt] &= \frac{\sigma^2}{n^2} \sum_{t=1}^n \sum_{r=1}^n \frac{\phi^{|t-r|}}{1-\phi^2} \\[6pt] &= \frac{1}{1-\phi^2} \cdot \frac{\sigma^2}{n^2} \sum_{t=1}^n \sum_{r=1}^n \phi^{|t-r|} \\[6pt] &= \frac{1}{1-\phi^2} \cdot \frac{\sigma^2}{n^2} \Bigg[ n + 2 \sum_{k=1}^{n-1} (n-k) \phi^k \Bigg] \\[6pt] &= \frac{n - 2\phi - n\phi^2 + 2\phi^{n+1}}{(1-\phi^2)(1-\phi)^2} \cdot \frac{\sigma^2}{n^2} \\[6pt] &= \frac{1}{n} \cdot \frac{n - 2\phi - n\phi^2 + 2\phi^{n+1}}{n(1-\phi)^2} \cdot \frac{\sigma^2}{1-\phi^2} \\[6pt] &= \frac{1}{n_\text{eff}(\phi)} \cdot \frac{\sigma^2}{1-\phi^2}, \\[6pt] \end{aligned} \end{equation}$$ where the "effective sample size" is defined by: $$n_\text{eff}(\phi) \equiv \frac{n^2(1-\phi)^2}{n - 2\phi - n\phi^2 + 2\phi^{n+1}}.$$ When $n=1$ we have $n_\text{eff}(\phi) = 1$ and as $n \rightarrow \infty$ we have $n_\text{eff}(\phi) \rightarrow n (1-\phi)/(1+\phi)$. 
The ratio $n_\text{eff}/n$ is decreasing with respect to $n$ if $\phi > 0$ and is increasing with respect to $n$ if $\phi < 0$. Standard error and quasi-pivotal quantity: Now that we have the variance of the sample mean, we have standard error: $$\text{se}(\bar{X}_n) = \frac{1}{\sqrt{n_\text{eff}(\phi)}} \cdot \frac{\sigma}{\sqrt{1-\phi^2}}.$$ The value $\text{se}(X_t) = \sigma / \sqrt{1-\phi^2}$ is the standard error for a single observation in the model. As we increase $n$ we adjust the standard error by dividing through by the square root of the effective sample size. If you are willing to count the variability arising from your estimator of $\phi$ as one lost degree-of-freedom, and otherwise ignore its variability, then you can form the quasi-pivotal quantity with approximate distribution: $$\frac{\bar{X}_n - \mu}{\hat{\text{se}}(X_t) / \sqrt{n_\text{eff}(\phi)}} \sim \text{T}(df = n-2).$$ This allows you to test the hypothesis of zero mean using a standard T-test, with an adjustment for the estimated auto-correlation between the values. Note that this is a fairly crude adjustment which does not take account of the variability in the estimator for $\phi$. Appendix - Some mathematical working: Here is the mathematical working for the last step of the above result. 
Using the results for sums of geometric sequences we have: $$\begin{equation} \begin{aligned} \sum_{k=1}^{n-1} (n-k) \phi^k &= n \sum_{k=1}^{n-1} \phi^k - \sum_{k=1}^{n-1} k \phi^k \\[6pt] &= n \sum_{k=1}^{n-1} \phi^k - \phi \frac{d}{d\phi} \sum_{k=1}^{n-1} \phi^k \\[6pt] &= n \cdot \frac{\phi-\phi^n}{1-\phi} - \phi \cdot \frac{d}{d\phi} \frac{\phi-\phi^n}{1-\phi} \\[6pt] &= n \phi \cdot \frac{(1-\phi)(1-\phi^{n-1})}{(1-\phi)^2} - \phi \cdot \frac{(1-\phi)(1-n\phi^{n-1}) + (\phi-\phi^n)}{(1-\phi)^2} \\[6pt] &= n \phi \cdot \frac{1-\phi-\phi^{n-1} + \phi^n}{(1-\phi)^2} - \phi \cdot \frac{1 -\phi -n\phi^{n-1} +n\phi^n +\phi -\phi^n}{(1-\phi)^2} \\[6pt] &= \frac{\phi}{(1-\phi)^2} \Big[ n (1-\phi-\phi^{n-1} + \phi^n) - (1 -n\phi^{n-1} +n\phi^n -\phi^n) \Big] \\[6pt] &= \frac{\phi}{(1-\phi)^2} \Big[ n - n\phi - n\phi^{n-1} + n\phi^n - 1 + n\phi^{n-1} - n\phi^n + \phi^n \Big] \\[6pt] &= \frac{\phi}{(1-\phi)^2} \Big[ (n-1) - n\phi +\phi^n \Big]. \\[6pt] \end{aligned} \end{equation}$$ We then have: $$\begin{equation} \begin{aligned} n + 2\sum_{k=1}^{n-1} (n-k) \phi^k &= n + \frac{\phi}{(1-\phi)^2} \Big[ 2(n-1) - 2n\phi + 2\phi^n \Big] \\[6pt] &= \frac{1}{(1-\phi)^2} \Big[ n (1-\phi)^2 + 2(n-1)\phi - 2n\phi^2 + 2\phi^{n+1} \Big] \\[6pt] &= \frac{1}{(1-\phi)^2} \Big[ n -2n\phi +n\phi^2 + 2n\phi - 2\phi - 2n\phi^2 + 2\phi^{n+1} \Big] \\[6pt] &= \frac{1}{(1-\phi)^2} \Big[ n - 2\phi - n\phi^2 + 2\phi^{n+1} \Big] \\[6pt] &= \frac{n - 2\phi - n\phi^2 + 2\phi^{n+1}}{(1-\phi)^2}. \\[6pt] \end{aligned} \end{equation}$$
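The adjusted test can be sketched in a few lines of Python. This is only an illustration of the formulas above: the plug-in estimate of $\phi$ via the lag-1 sample autocorrelation is one simple choice (not prescribed by the derivation), and the p-value uses a normal approximation to the T distribution for simplicity.

```python
import math
import numpy as np

def n_eff(n, phi):
    """Effective sample size for the mean of an AR(1) series."""
    return n**2 * (1 - phi)**2 / (n - 2*phi - n*phi**2 + 2*phi**(n + 1))

def ar1_t_test(x):
    """Two-sided test of H0: mu = 0 with an AR(1) adjustment."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    xc = x - x.mean()
    phi_hat = np.sum(xc[1:] * xc[:-1]) / np.sum(xc**2)  # lag-1 autocorrelation
    se_single = x.std(ddof=1)            # estimates sigma / sqrt(1 - phi^2)
    t = x.mean() / (se_single / math.sqrt(n_eff(n, phi_hat)))
    p = math.erfc(abs(t) / math.sqrt(2)) # two-sided tail, normal approximation
    return t, p

# Simulate a stationary AR(1) series with mean 0 and phi = 0.5.
rng = np.random.default_rng(1)
n, phi = 500, 0.5
x = np.empty(n)
x[0] = rng.normal(scale=1 / math.sqrt(1 - phi**2))
for i in range(1, n):
    x[i] = phi * x[i - 1] + rng.normal()
t_stat, p_value = ar1_t_test(x)
print(t_stat, p_value)
```

As a sanity check on `n_eff`, it returns 1 at $n = 1$ and approaches $n(1-\phi)/(1+\phi)$ for large $n$, matching the limits derived above.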
28,826
Logit transformation or beta regression for proportion data
They mean that once you have transformed your dependent variable (e.g., from $y$ to ${\rm logit}(y)$), the parameters of the regression model tell you how the independent variables affect ${\rm logit}(y)$, not $y$ itself. Suppose sex is one of your independent variables and you see a coefficient of 2 for males against females. If you used the logit transformation, the interpretation is that being male increases ${\rm logit}(y)$ by 2; it says nothing directly about the proportion itself. If you did not transform, the coefficient describes a change in the raw proportion. EDIT: Beta regression uses the logit to transform the mean of the distribution assumed for the data (the beta distribution in this case), while linear regression with a logit-transformed dependent variable transforms the data themselves. So in beta regression we model ${\rm logit}(E(y))$, while in linear regression with a logit-transformed dependent variable we model $E({\rm logit}(y))$. These two are not the same.
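A small simulation makes the last point concrete (the Beta(2, 5) "data" here are purely hypothetical): by Jensen's inequality, the mean of the logit is not the logit of the mean, so the two models target different quantities.

```python
import numpy as np

rng = np.random.default_rng(0)

def logit(p):
    return np.log(p / (1 - p))

# draw proportions from an assumed Beta(2, 5) distribution
y = rng.beta(2, 5, size=100_000)

logit_of_mean = logit(y.mean())   # what beta regression models: logit(E(y))
mean_of_logit = logit(y).mean()   # what OLS on logit(y) models:  E(logit(y))
# the two quantities differ, so the two approaches estimate different things
```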
Logit transformation or beta regression for proportion data
They mean that once you transformed your dependent variable (e.g., from $y$ to ${\rm logit}(y)$), the parameters of the regression model tell you how independent variables affect ${\rm logit}(y)$, n
Logit transformation or beta regression for proportion data They mean that once you have transformed your dependent variable (e.g., from $y$ to ${\rm logit}(y)$), the parameters of the regression model tell you how the independent variables affect ${\rm logit}(y)$, not $y$ itself. Suppose sex is one of your independent variables and you see a coefficient of 2 for males against females. If you used the logit transformation, the interpretation is that being male increases ${\rm logit}(y)$ by 2; it says nothing directly about the proportion itself. If you did not transform, the coefficient describes a change in the raw proportion. EDIT: Beta regression uses the logit to transform the mean of the distribution assumed for the data (the beta distribution in this case), while linear regression with a logit-transformed dependent variable transforms the data themselves. So in beta regression we model ${\rm logit}(E(y))$, while in linear regression with a logit-transformed dependent variable we model $E({\rm logit}(y))$. These two are not the same.
Logit transformation or beta regression for proportion data They mean that once you transformed your dependent variable (e.g., from $y$ to ${\rm logit}(y)$), the parameters of the regression model tell you how independent variables affect ${\rm logit}(y)$, n
28,827
Bootstrapping dataset with imbalanced classes
One method you can try is a form of "stratified" bootstrap. You can resample from each group separately, even un-proportionally. Doing so estimates the empirical distribution of each group, as the bootstrap does. Then, to obtain the statistic you want to calculate, you have to weight each sample according to the oversampling or undersampling applied to its class. That's the general idea. There seems to be a paper tackling this exact problem; it might be worth going over. This question might also help you, in case you are okay with sampling each class at its original proportion.
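A minimal sketch of the idea, for the simple case of bootstrapping an overall mean (the function name and the per-class resample sizes are illustrative, not from any library): each class is resampled separately, possibly un-proportionally, and the per-class estimates are then recombined using the true class proportions.

```python
import numpy as np

def stratified_bootstrap_mean(x, labels, n_per_class, reps=1000, seed=0):
    # Bootstrap the overall mean by resampling each class separately
    # (possibly un-proportionally), then reweighting by the classes'
    # true proportions. n_per_class maps class label -> resample size.
    rng = np.random.default_rng(seed)
    x, labels = np.asarray(x), np.asarray(labels)
    classes, counts = np.unique(labels, return_counts=True)
    true_props = counts / counts.sum()
    out = []
    for _ in range(reps):
        est = 0.0
        for c, w in zip(classes, true_props):
            pool = x[labels == c]
            samp = rng.choice(pool, size=n_per_class[c], replace=True)
            est += w * samp.mean()          # weighted combination
        out.append(est)
    return np.array(out)
```

Because the weighting undoes the un-proportional sampling, the bootstrap distribution still targets the statistic under the original class proportions.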
Bootstrapping dataset with imbalanced classes
One method you can try is a form of "stratified"-bootstrap. You can subsample from each group separately, even un-proportionally. Doing so will result in estimation of the empirical distribution of ea
Bootstrapping dataset with imbalanced classes One method you can try is a form of "stratified" bootstrap. You can resample from each group separately, even un-proportionally. Doing so estimates the empirical distribution of each group, as the bootstrap does. Then, to obtain the statistic you want to calculate, you have to weight each sample according to the oversampling or undersampling applied to its class. That's the general idea. There seems to be a paper tackling this exact problem; it might be worth going over. This question might also help you, in case you are okay with sampling each class at its original proportion.
Bootstrapping dataset with imbalanced classes One method you can try is a form of "stratified"-bootstrap. You can subsample from each group separately, even un-proportionally. Doing so will result in estimation of the empirical distribution of ea
28,828
How does ResNet or CNN with skip connections solve the gradient exploding problem?
To my understanding, during backprop the skip connection's path passes gradient updates as well. Conceptually this acts much like a synthetic gradient. Instead of waiting for the gradient to propagate back one layer at a time, the skip connection's path allows the gradient to reach the early layers with greater magnitude by skipping some layers in between. I personally do not find any improvement, nor a greater risk of encountering exploding gradients, with skip connections.
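A toy calculation (purely linear layers with a small weight w, not a claim about real networks) illustrates why the skip path delivers gradients of greater magnitude to early layers: a plain chain multiplies the gradient by w at every layer, while a residual chain multiplies by (1 + w), which never falls below one.

```python
# Toy illustration: a plain chain y = w*x repeated `depth` times has
# gradient w**depth, while a residual chain y = x + w*x has gradient
# (1 + w)**depth, so the identity path keeps the gradient from vanishing.
def grad_plain(depth, w=0.01):
    return w ** depth

def grad_residual(depth, w=0.01):
    return (1 + w) ** depth
```

Note this shows why gradients do not vanish along the identity path; it does not by itself say anything about an upper bound on gradient magnitude.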
How does ResNet or CNN with skip connections solve the gradient exploding problem?
To my understanding, during backprop, skip connection's path will pass gradient update as well. Conceptually this update acts similar to synthetic gradient's purpose. Instead of waiting for gradient t
How does ResNet or CNN with skip connections solve the gradient exploding problem? To my understanding, during backprop the skip connection's path passes gradient updates as well. Conceptually this acts much like a synthetic gradient. Instead of waiting for the gradient to propagate back one layer at a time, the skip connection's path allows the gradient to reach the early layers with greater magnitude by skipping some layers in between. I personally do not find any improvement, nor a greater risk of encountering exploding gradients, with skip connections.
How does ResNet or CNN with skip connections solve the gradient exploding problem? To my understanding, during backprop, skip connection's path will pass gradient update as well. Conceptually this update acts similar to synthetic gradient's purpose. Instead of waiting for gradient t
28,829
How does ResNet or CNN with skip connections solve the gradient exploding problem?
I'm not 100% sure, but I would guess that this refers more to normalization like BatchNorm than to skip connections. It's not that ResNets will never explode without any normalization, nor that a plain VGG-style network will explode if you place BatchNorms properly. Skip connections, I guess, only help make the function smoother and the logic that the network computes less convoluted, but that is fairly unrelated to the exploding-gradient problem. I found that having an activation function after, for example, BatchNorm may also be crucial to prevent exploding gradients: sometimes, when I didn't have it follow BatchNorm, or when I had it precede BatchNorm, the loss was blowing up.
How does ResNet or CNN with skip connections solve the gradient exploding problem?
I'm not 100% sure, but I would guess that this is more referring to normalization like BatchNorm rather than skip connections. It's not like ResNets will not explode without any normalization and not
How does ResNet or CNN with skip connections solve the gradient exploding problem? I'm not 100% sure, but I would guess that this refers more to normalization like BatchNorm than to skip connections. It's not that ResNets will never explode without any normalization, nor that a plain VGG-style network will explode if you place BatchNorms properly. Skip connections, I guess, only help make the function smoother and the logic that the network computes less convoluted, but that is fairly unrelated to the exploding-gradient problem. I found that having an activation function after, for example, BatchNorm may also be crucial to prevent exploding gradients: sometimes, when I didn't have it follow BatchNorm, or when I had it precede BatchNorm, the loss was blowing up.
How does ResNet or CNN with skip connections solve the gradient exploding problem? I'm not 100% sure, but I would guess that this is more referring to normalization like BatchNorm rather than skip connections. It's not like ResNets will not explode without any normalization and not
28,830
How does ResNet or CNN with skip connections solve the gradient exploding problem?
A possible reasoning is that residual connections reduce the feature space that a network searches for, as mentioned here: A neural network without residual parts explores more of the feature space. This makes it more vulnerable to perturbations that cause it to leave the manifold, and necessitates extra training data to recover. Due to lower sensitivity to perturbations, the network can tend to have smaller loss values, leading to smaller gradients and hence prevent gradient explosion.
How does ResNet or CNN with skip connections solve the gradient exploding problem?
A possible reasoning is that residual connections reduce the feature space that a network searches for, as mentioned here: A neural network without residual parts explores more of the feature space.
How does ResNet or CNN with skip connections solve the gradient exploding problem? A possible reasoning is that residual connections reduce the feature space that a network searches for, as mentioned here: A neural network without residual parts explores more of the feature space. This makes it more vulnerable to perturbations that cause it to leave the manifold, and necessitates extra training data to recover. Due to lower sensitivity to perturbations, the network can tend to have smaller loss values, leading to smaller gradients and hence prevent gradient explosion.
How does ResNet or CNN with skip connections solve the gradient exploding problem? A possible reasoning is that residual connections reduce the feature space that a network searches for, as mentioned here: A neural network without residual parts explores more of the feature space.
28,831
Why is uniform prior on log(x) equal to 1/x prior on x?
When transforming a uniform distribution on $\log(\sigma)$ to a distribution on $\sigma$ you need to take into account the Jacobian of the transformation. This corresponds, as you correctly intuited, to $1/\sigma$. Writing this a little more clearly, let $X=\log(\sigma)$; the transformation we're after is $T(X)=\sigma=e^{X}=Y$, which has inverse transformation $T^{-1}(Y)=\log(Y)$. The Jacobian is then $|\frac{\partial X}{\partial Y}|=1/Y$. So since $p(X)\propto 1$, the induced density for $\sigma$ is $p(Y)=|\frac{\partial X}{\partial Y}|\,p(\log(Y))\propto 1/Y$.
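This can be checked by simulation: if $\log(\sigma)$ is uniform on $[\log a, \log b]$, then $p(\sigma)\propto 1/\sigma$ implies the CDF of $\sigma$ is $\log(\sigma/a)/\log(b/a)$. Below is a quick sanity check (the interval $[1, 100]$ and the seed are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)
a, b = 1.0, 100.0

# uniform on log(sigma) over [log a, log b]
x = rng.uniform(np.log(a), np.log(b), size=200_000)
sigma = np.exp(x)

# if p(sigma) is proportional to 1/sigma on [a, b], then
# P(sigma <= s) = log(s/a) / log(b/a)
empirical = np.mean(sigma <= 10.0)
theoretical = np.log(10.0 / a) / np.log(b / a)   # = 0.5 here
```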
Why is uniform prior on log(x) equal to 1/x prior on x?
When transforming a uniform distribution on $\log(\sigma)$ to a distribution on $\sigma$ you need to take into account the Jacobian of the transformation. This corresponds, as you correctly intuited,
Why is uniform prior on log(x) equal to 1/x prior on x? When transforming a uniform distribution on $\log(\sigma)$ to a distribution on $\sigma$ you need to take into account the Jacobian of the transformation. This corresponds, as you correctly intuited, to $1/\sigma$. Writing this a little more clearly, let $X=\log(\sigma)$; the transformation we're after is $T(X)=\sigma=e^{X}=Y$, which has inverse transformation $T^{-1}(Y)=\log(Y)$. The Jacobian is then $|\frac{\partial X}{\partial Y}|=1/Y$. So since $p(X)\propto 1$, the induced density for $\sigma$ is $p(Y)=|\frac{\partial X}{\partial Y}|\,p(\log(Y))\propto 1/Y$.
Why is uniform prior on log(x) equal to 1/x prior on x? When transforming a uniform distribution on $\log(\sigma)$ to a distribution on $\sigma$ you need to take into account the Jacobian of the transformation. This corresponds, as you correctly intuited,
28,832
Why do hypothesis tests on resampled datasets reject the null too often?
When you resample under the null, the expected value of the regression coefficient is zero. When you resample some observed dataset, the expected value is the observed coefficient for that data. It's not a type I error if P <= 0.05 when you resample the observed data; in fact, it's a type II error if you have P > 0.05. You can gain some intuition by computing the correlation between abs(b) and mean(P). Here is simpler code to replicate what you did, plus compute the correlation between b and the "type I" error rate over the set of simulations:
boot.reps <- 1000
n.sims.run <- 10
n <- 1000
b <- matrix(NA, nrow=boot.reps, ncol=n.sims.run)
p <- matrix(NA, nrow=boot.reps, ncol=n.sims.run)
for(sim_j in 1:n.sims.run){
  x <- rnorm(n)
  y <- rnorm(n)
  inc <- 1:n
  for(boot_i in 1:boot.reps){
    fit <- lm(y[inc] ~ x[inc])
    b[boot_i, sim_j] <- abs(coefficients(summary(fit))['x[inc]', 'Estimate'])
    p[boot_i, sim_j] <- coefficients(summary(fit))['x[inc]', 'Pr(>|t|)']
    inc <- sample(1:n, replace=TRUE)
  }
}
# note this is not really a type I error but whatever
type1 <- apply(p, 2, function(x) sum(x <= 0.05))/boot.reps
# correlation between b and "type I"
cor(b[1, ], type1)
Update: the answer by grand_chat is not the reason the frequency of P <= 0.05 is > 0.05. The answer is very simple and is what I've said above -- the expected value of the mean of each resample is the original, observed mean. This is the whole basis of the bootstrap, which was developed to generate standard errors/confidence limits on an observed mean, not as a hypothesis test. Since the expectation is not zero, of course the "type I error" will be greater than alpha. And this is why there will be a correlation between the magnitude of the coefficient (how far it is from zero) and the magnitude of the deviation of the "type I error" from alpha.
Why do hypothesis tests on resampled datasets reject the null too often?
When you resample the null, the expected value of the regression coefficient is zero. When you resample some observed dataset, the expected value is the observed coefficient for that data. It's not a
Why do hypothesis tests on resampled datasets reject the null too often? When you resample under the null, the expected value of the regression coefficient is zero. When you resample some observed dataset, the expected value is the observed coefficient for that data. It's not a type I error if P <= 0.05 when you resample the observed data; in fact, it's a type II error if you have P > 0.05. You can gain some intuition by computing the correlation between abs(b) and mean(P). Here is simpler code to replicate what you did, plus compute the correlation between b and the "type I" error rate over the set of simulations:
boot.reps <- 1000
n.sims.run <- 10
n <- 1000
b <- matrix(NA, nrow=boot.reps, ncol=n.sims.run)
p <- matrix(NA, nrow=boot.reps, ncol=n.sims.run)
for(sim_j in 1:n.sims.run){
  x <- rnorm(n)
  y <- rnorm(n)
  inc <- 1:n
  for(boot_i in 1:boot.reps){
    fit <- lm(y[inc] ~ x[inc])
    b[boot_i, sim_j] <- abs(coefficients(summary(fit))['x[inc]', 'Estimate'])
    p[boot_i, sim_j] <- coefficients(summary(fit))['x[inc]', 'Pr(>|t|)']
    inc <- sample(1:n, replace=TRUE)
  }
}
# note this is not really a type I error but whatever
type1 <- apply(p, 2, function(x) sum(x <= 0.05))/boot.reps
# correlation between b and "type I"
cor(b[1, ], type1)
Update: the answer by grand_chat is not the reason the frequency of P <= 0.05 is > 0.05. The answer is very simple and is what I've said above -- the expected value of the mean of each resample is the original, observed mean. This is the whole basis of the bootstrap, which was developed to generate standard errors/confidence limits on an observed mean, not as a hypothesis test. Since the expectation is not zero, of course the "type I error" will be greater than alpha. And this is why there will be a correlation between the magnitude of the coefficient (how far it is from zero) and the magnitude of the deviation of the "type I error" from alpha.
Why do hypothesis tests on resampled datasets reject the null too often? When you resample the null, the expected value of the regression coefficient is zero. When you resample some observed dataset, the expected value is the observed coefficient for that data. It's not a
28,833
Why do hypothesis tests on resampled datasets reject the null too often?
If you sample with replacement from your original normal sample, the resulting bootstrap sample isn't normal. The joint distribution of the bootstrap sample follows a gnarly mixture distribution that is very likely to contain duplicate records, whereas duplicate values have probability zero when you take iid samples from a normal distribution. As a simple example, if your original sample is two observations from a univariate normal distribution, then a bootstrap sample with replacement will half the time consist of the original sample, and half the time will consist of one of the original values, duplicated. It's clear that the sample variance of the bootstrap sample will on average be less than that of the original -- in fact it will be half the original. The main consequence is that the inference you're doing based on normal theory returns the wrong $p$-values when applied to the bootstrap sample. In particular, normal theory yields anticonservative decision rules, because your bootstrap sample will produce $t$ statistics whose denominators are smaller than would be expected under normal theory, owing to the presence of duplicates. As a result, the normal-theory hypothesis test ends up rejecting the null hypothesis more often than expected.
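The two-observation example can be checked numerically; this is a small sketch with an assumed seed and resample count. Half the resamples are the original pair (variance $s^2$) and half are a duplicated single value (variance 0), so the bootstrap sample variance averages to half the original.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=2)              # an original sample of two observations
s2 = np.var(x, ddof=1)              # its sample variance

# average sample variance over many bootstrap resamples of size 2
boot_vars = [np.var(rng.choice(x, size=2, replace=True), ddof=1)
             for _ in range(20_000)]
ratio = np.mean(boot_vars) / s2     # close to 0.5, as claimed above
```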
Why do hypothesis tests on resampled datasets reject the null too often?
If you sample with replacement from your original normal sample, the resulting bootstrap sample isn't normal. The joint distribution of the bootstrap sample follows a gnarly mixture distribution that
Why do hypothesis tests on resampled datasets reject the null too often? If you sample with replacement from your original normal sample, the resulting bootstrap sample isn't normal. The joint distribution of the bootstrap sample follows a gnarly mixture distribution that is very likely to contain duplicate records, whereas duplicate values have probability zero when you take iid samples from a normal distribution. As a simple example, if your original sample is two observations from a univariate normal distribution, then a bootstrap sample with replacement will half the time consist of the original sample, and half the time will consist of one of the original values, duplicated. It's clear that the sample variance of the bootstrap sample will on average be less than that of the original -- in fact it will be half the original. The main consequence is that the inference you're doing based on normal theory returns the wrong $p$-values when applied to the bootstrap sample. In particular, normal theory yields anticonservative decision rules, because your bootstrap sample will produce $t$ statistics whose denominators are smaller than would be expected under normal theory, owing to the presence of duplicates. As a result, the normal-theory hypothesis test ends up rejecting the null hypothesis more often than expected.
Why do hypothesis tests on resampled datasets reject the null too often? If you sample with replacement from your original normal sample, the resulting bootstrap sample isn't normal. The joint distribution of the bootstrap sample follows a gnarly mixture distribution that
28,834
Why do hypothesis tests on resampled datasets reject the null too often?
I totally agree with @JWalker's answer. There's another aspect to this problem, in your resampling process. You expect the regression coefficient to be centered around zero because you assume X and Y are independent. However, in your resampling you do ids = sample( 1:nrow(d), replace=TRUE ); b = d[ ids, ], which creates correlation because you are sampling X and Y together. For example, say the first row of dataset d is (x1, y1). In the resampled dataset, P(Y = y1|X = x1) = 1, while if X and Y are independent, P(Y|X = x1) follows a normal distribution. So another way to fix this is to use b = data.frame( X1 = rnorm( n = 1000 ), Y1 = rnorm( n = 1000 ) ), the same code you used to generate d, in order to make X and Y independent from each other. The same reason explains why it works with residual resampling (because X is independent from the new Y).
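A minimal numeric check of this point (sample sizes, replication counts, and seed are arbitrary choices, written in Python rather than R): resampling (x, y) pairs gives bootstrap slopes centered on the observed slope, whereas regenerating independent data gives slopes centered on zero.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000
x = rng.normal(size=n)
y = rng.normal(size=n)
b_obs = np.polyfit(x, y, 1)[0]      # the observed slope (nonzero by chance)

# resampling (x, y) PAIRS: bootstrap slopes center on b_obs, not on 0
pair_slopes = []
for _ in range(500):
    ids = rng.integers(0, n, size=n)
    pair_slopes.append(np.polyfit(x[ids], y[ids], 1)[0])

# regenerating independent data instead: slopes center on 0
fresh_slopes = [np.polyfit(rng.normal(size=n), rng.normal(size=n), 1)[0]
                for _ in range(500)]
```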
Why do hypothesis tests on resampled datasets reject the null too often?
I totally agree with @JWalker's answer. There's another aspect of this problem. That is in your resampling process. You expect the regression coefficient to be centered around zero because you assume
Why do hypothesis tests on resampled datasets reject the null too often? I totally agree with @JWalker's answer. There's another aspect to this problem, in your resampling process. You expect the regression coefficient to be centered around zero because you assume X and Y are independent. However, in your resampling you do ids = sample( 1:nrow(d), replace=TRUE ); b = d[ ids, ], which creates correlation because you are sampling X and Y together. For example, say the first row of dataset d is (x1, y1). In the resampled dataset, P(Y = y1|X = x1) = 1, while if X and Y are independent, P(Y|X = x1) follows a normal distribution. So another way to fix this is to use b = data.frame( X1 = rnorm( n = 1000 ), Y1 = rnorm( n = 1000 ) ), the same code you used to generate d, in order to make X and Y independent from each other. The same reason explains why it works with residual resampling (because X is independent from the new Y).
Why do hypothesis tests on resampled datasets reject the null too often? I totally agree with @JWalker's answer. There's another aspect of this problem. That is in your resampling process. You expect the regression coefficient to be centered around zero because you assume
28,835
Why do hypothesis tests on resampled datasets reject the null too often?
The biggest issue here is that the model results are spurious and therefore highly unstable, because the model is just fitting noise, in a very literal sense. Y1 is not a dependent variable, due to how the sample data was generated. Edit, in response to the comments: Let me make another try at explaining my thinking. With an OLS the general intent is to discover and quantify the underlying relationships in the data. With real-world data, we usually do not know those exactly. But this is an artificial test situation. We know the EXACT data-generating mechanism; it's right there in the code posted by the O.P. It's X1 = rnorm( n = 1000 ), Y1 = rnorm( n = 1000 ). If we express that in the familiar form of an OLS regression, i.e. Y1 = intercept + Beta1 * X1 + Error, that becomes Y1 = mean(Y1) + 0*X1 + Error. So in my mind, this is a model expressed in linear FORM, but it is NOT actually a linear relationship/model, because there is no slope: Beta1 = 0.000000. When we generate the 1000 random data points, the scatterplot is going to look like the classic circular shotgun spray. There could be some correlation between X1 and Y1 in the specific sample of 1000 random points that was generated, but if so, it is random happenstance. If the OLS does find a correlation, i.e., rejects the null hypothesis that there is no slope, then since we know definitively that there really isn't any relationship between these two variables, the OLS has literally found a pattern in the error component. I characterized that as "fitting the noise" and "spurious." In addition, one of the standard assumptions/requirements of an OLS is (roughly) that "the linear regression model is linear in parameters." Given the data, my take is that we do not satisfy that assumption; hence the underlying test statistics for significance are inaccurate. My belief is that the violation of the linearity assumption is the direct cause of the non-intuitive results of the bootstrap.
When I first read this problem, it did not sink in that the O.P. was intending to test under the null [hypothesis]. But would the same non-intuitive results happen had the dataset been generated as X1 = rnorm( n = 1000 ), Y1 = X1*.4 + rnorm( n = 1000 )?
Why do hypothesis tests on resampled datasets reject the null too often?
The biggest issue here is that the model results are spurious and therefore highly unstable, because the model is just fitting noise. In a very literal sense. Y1 is not a dependent variable due to how
Why do hypothesis tests on resampled datasets reject the null too often? The biggest issue here is that the model results are spurious and therefore highly unstable, because the model is just fitting noise, in a very literal sense. Y1 is not a dependent variable, due to how the sample data was generated. Edit, in response to the comments: Let me make another try at explaining my thinking. With an OLS the general intent is to discover and quantify the underlying relationships in the data. With real-world data, we usually do not know those exactly. But this is an artificial test situation. We know the EXACT data-generating mechanism; it's right there in the code posted by the O.P. It's X1 = rnorm( n = 1000 ), Y1 = rnorm( n = 1000 ). If we express that in the familiar form of an OLS regression, i.e. Y1 = intercept + Beta1 * X1 + Error, that becomes Y1 = mean(Y1) + 0*X1 + Error. So in my mind, this is a model expressed in linear FORM, but it is NOT actually a linear relationship/model, because there is no slope: Beta1 = 0.000000. When we generate the 1000 random data points, the scatterplot is going to look like the classic circular shotgun spray. There could be some correlation between X1 and Y1 in the specific sample of 1000 random points that was generated, but if so, it is random happenstance. If the OLS does find a correlation, i.e., rejects the null hypothesis that there is no slope, then since we know definitively that there really isn't any relationship between these two variables, the OLS has literally found a pattern in the error component. I characterized that as "fitting the noise" and "spurious." In addition, one of the standard assumptions/requirements of an OLS is (roughly) that "the linear regression model is linear in parameters." Given the data, my take is that we do not satisfy that assumption; hence the underlying test statistics for significance are inaccurate.
My belief is that the violation of the linearity assumption is the direct cause of the non-intuitive results of the bootstrap. When I first read this problem, it did not sink in that the O.P. was intending to test under the null [hypothesis]. But would the same non-intuitive results happen had the dataset been generated as X1 = rnorm( n = 1000 ), Y1 = X1*.4 + rnorm( n = 1000 )?
Why do hypothesis tests on resampled datasets reject the null too often? The biggest issue here is that the model results are spurious and therefore highly unstable, because the model is just fitting noise. In a very literal sense. Y1 is not a dependent variable due to how
28,836
How to avoid 'Catastrophic forgetting'?
Catastrophic forgetting is an inherent problem in neural networks. From Wikipedia, (Catastrophic forgetting) is a radical manifestation of the 'sensitivity-stability' dilemma or the 'stability-plasticity' dilemma. Specifically, these problems refer to the issue of being able to make an artificial neural network that is sensitive to, but not disrupted by, new information. Lookup tables and connectionist networks lie on the opposite sides of the stability-plasticity spectrum. The former remains completely stable in the presence of new information but lacks the ability to generalize, i.e. infer general principles, from new inputs. What is catastrophic forgetting? Let's consider two tasks: task A and task B. Now, suppose we're using a pre-trained model that is already pretty good on task A (learned weights $\theta_A$), and we want to "fine-tune" it to also fit task B. The common practice is to take the weights of a model trained on task A and use them as initialization for training on task B. This works well in applications in which task B is a "sub-task" of task A (e.g. task B is detecting eyeglasses, and task A is detecting faces). When B is not a sub-task of A, there is the fear that catastrophic forgetting will occur: essentially, the network will use the same neurons that were previously optimized for task A for predicting on task B. In doing this, it will completely lose its ability to classify instances of task A correctly. You can actually experiment with this yourself: you can build a small network that can tell whether an MNIST image is a 5 or not a 5, and measure its accuracy at this task; if you then go on to fine-tune this model to the task of telling whether an MNIST image is a 4 or not, you will notice that the accuracy of the final model on the original task (recognizing 5s) has worsened. A naive solution.
The naive solution to catastrophic forgetting would be to not only initialize the weights of the fine-tuned model to $\theta_A$, but also add regularization: penalize the solution of the fine-tuned model when it gets far from $\theta_A$. Essentially, this means the objective will be to find the best solution for task B that is still similar to $\theta_A$, the solution to task A. The reason we call this a naive approach is that it often doesn't work well. The functions learned by neural networks are often very complicated and far from linear, so a small change in parameter values (i.e. $\theta_B$ being close to $\theta_A$) can still lead to very different outcomes (i.e. $f_{\theta_A}$ being very different from $f_{\theta_B}$). Since it's the outcomes we care about, this is bad for us. Pseudo-rehearsal. A better approach would be to try to be good on task B while simultaneously giving answers similar to those given by $f_{\theta_A}$. The good thing is that this approach is very easy to implement: once you have learned $\theta_A$, you can use that model to generate an infinite number of "labeled" examples $(x,f_{\theta_A}(x))$. Then, when training the fine-tuned model, you alternate between examples labeled for task B and examples of the form $(x,f_{\theta_A}(x))$. You can think of the latter as "revision exercises" that make sure the network does not lose its ability to handle task A while learning to handle task B. An even better approach: add memory. As humans, we are good both at generalizing (plasticity) from new examples and at remembering very rare events, or maintaining skills we haven't used for a while (stability). In many ways the only method to achieve something similar with deep neural networks, as we know them, is to incorporate some form of "memory" into them. This is outside the scope of your question, but it is an interesting and active field of research, so I thought I'd mention it.
See, for example, this recent work: LEARNING TO REMEMBER RARE EVENTS.
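The pseudo-rehearsal data mixing can be sketched in a few lines; the helper below is hypothetical (its name, the choice of random Gaussian inputs, and the batch sizes are all illustrative), and model_a stands in for the frozen task-A model $f_{\theta_A}$:

```python
import numpy as np

def pseudo_rehearsal_batch(x_b, y_b, model_a, rng, n_rehearsal=32):
    # Hypothetical helper: mix real task-B examples with "revision
    # exercises" -- random inputs labeled by the frozen task-A model.
    # model_a is any callable mapping a batch of inputs to labels.
    x_r = rng.normal(size=(n_rehearsal,) + x_b.shape[1:])
    y_r = model_a(x_r)                      # answers given by f_theta_A
    x = np.concatenate([x_b, x_r])
    y = np.concatenate([y_b, y_r])
    return x, y
```

Training on such mixed batches alternates, within each step, between task-B supervision and matching the task-A model's answers, which is the scheme described above.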
How to avoid 'Catastrophic forgetting'?
Catastrophic forgetting is a inherent problem in neural networks. From Wikipedia, (Catastrophic forgetting) is a radical manifestation of the 'sensitivity-stability' dilemma or the 'stability-plast
How to avoid 'Catastrophic forgetting'?

Catastrophic forgetting is an inherent problem in neural networks. From Wikipedia, (Catastrophic forgetting) is a radical manifestation of the 'sensitivity-stability' dilemma or the 'stability-plasticity' dilemma. Specifically, these problems refer to the issue of being able to make an artificial neural network that is sensitive to, but not disrupted by, new information. Lookup tables and connectionist networks lie on opposite sides of the stability-plasticity spectrum. The former remain completely stable in the presence of new information but lack the ability to generalize, i.e. infer general principles, from new inputs.

What is Catastrophic forgetting? Let's consider two tasks: task A and task B. Now, suppose we're using a pre-trained model that is already pretty good on task A (learned weights $\theta_A$), and we want to "fine-tune" it to also fit task B. The common practice is to take the weights of a model trained on task A and use them as initialization for training on task B. This works well in applications in which task B is a "sub-task" of task A (e.g. task B is detecting eyeglasses, and task A is detecting faces). When B is not a sub-task of A, there is the fear that catastrophic forgetting will occur: essentially, the network will use the same neurons that were previously optimized for task A for predicting on task B. In doing this, it will completely lose its ability to classify instances of task A correctly. You can actually experiment with this yourself: build a small network that can tell whether an MNIST image is a 5 or not a 5 and measure its accuracy at this task; if you then go on to fine-tune this model to the task of telling whether an MNIST image is a 4 or not, you will note that the accuracy of the final model on the original task (recognizing 5s) has worsened.

A Naive Solution. The naive solution to catastrophic forgetting would be to not only initialize the weights of the fine-tuned model to be $\theta_A$, but also add regularization: penalize the solution of the fine-tuned model when it gets far from $\theta_A$. Essentially, this means the objective will be to find the best solution for task B that is still similar to $\theta_A$, the solution to task A. The reason why we call this a naive approach is that it often doesn't work well. The functions learned by neural networks are often very complicated and far from linear, so a small change in parameter values (i.e. $\theta_B$ being close to $\theta_A$) can still lead to very different outcomes (i.e. $f_{\theta_A}$ is very different from $f_{\theta_B}$). Since it's the outcomes we care about, this is bad for us.

Pseudo-rehearsal. A better approach would be to try to be good on task B while simultaneously giving answers similar to those given by $f_{\theta_A}$. The good thing is that this approach is very easy to implement: once you have learned $\theta_A$, you can use that model to generate an infinite number of "labeled" examples $(x,f_{\theta_A}(x))$. Then, when training the fine-tuned model, you alternate between examples labeled for task B and examples of the form $(x,f_{\theta_A}(x))$. You can think of the latter as "revision exercises" that make sure our network does not lose its ability to handle task A while learning to handle task B.

An even better approach: add memory. As humans, we are good both at generalizing (plasticity) using new examples and at remembering very rare events, or maintaining skills we haven't used for a while (stability). In many ways the only method to achieve something similar with deep neural networks, as we know them, is to incorporate some form of "memory" into them. This is outside the scope of your question, but it is an interesting and active field of research, so I thought I'd mention it. See, for example, this recent work: LEARNING TO REMEMBER RARE EVENTS.
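The pseudo-rehearsal idea can be sketched in a few lines of pure Python. Everything here is illustrative: `f_theta_A` is a hypothetical stand-in for the frozen task-A model, and the task-B examples are made up.

```python
import random

# Hypothetical stand-in for the frozen model trained on task A; in a real
# setting this would be the trained network f_{theta_A}.
def f_theta_A(x):
    return 1.0 if x > 0.5 else 0.0

# Made-up labeled examples for the new task B.
task_b_data = [(0.1, 0.0), (0.9, 1.0), (0.4, 0.0), (0.7, 1.0)]

def pseudo_rehearsal_stream(task_b_data, n_rehearsal=1, seed=0):
    """Interleave real task-B examples with pseudo-labeled 'revision'
    examples (x, f_theta_A(x)) obtained by querying the frozen task-A
    model on random inputs."""
    rng = random.Random(seed)
    stream = []
    for x_b, y_b in task_b_data:
        stream.append(("B", x_b, y_b))             # real task-B example
        for _ in range(n_rehearsal):
            x = rng.random()                        # any input will do
            stream.append(("A", x, f_theta_A(x)))   # label comes from the old model
    return stream

stream = pseudo_rehearsal_stream(task_b_data)
# Train the fine-tuned model on `stream`: the "A" items keep its outputs
# close to f_theta_A's behaviour while the "B" items teach the new task.
```
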
28,837
How to compute accuracy for multi class classification problem and how is accuracy equal to weighted precision?
I've got a wonderful and perfectly understandable solution for this problem, as I was looking for the same in this Question.

You can calculate and store accuracy with:

(accuracy <- sum(diag(mat)) / sum(mat))
# [1] 0.9333333

Precision for each class (assuming the predictions are on the rows and the true outcomes are on the columns) can be computed with:

(precision <- diag(mat) / rowSums(mat))
#     setosa versicolor  virginica
#  1.0000000  0.9090909  0.8750000

If you wanted to grab the precision for a particular class, you could do:

(precision.versicolor <- precision["versicolor"])
# versicolor
#  0.9090909

Recall for each class (again assuming the predictions are on the rows and the true outcomes are on the columns) can be calculated with:

(recall <- diag(mat) / colSums(mat))
#     setosa versicolor  virginica
#  1.0000000  0.8695652  0.9130435

If you wanted recall for a particular class, you could do something like:

(recall.virginica <- recall["virginica"])
# virginica
# 0.9130435

If instead you had the true outcomes as the rows and the predicted outcomes as the columns, then you would flip the precision and recall definitions.

Data:

(mat = as.matrix(read.table(text="
           setosa versicolor virginica
setosa         29          0         0
versicolor      0         20         2
virginica       0          3        21", header=T)))
#            setosa versicolor virginica
# setosa         29          0         0
# versicolor      0         20         2
# virginica       0          3        21
28,838
How to compute accuracy for multi class classification problem and how is accuracy equal to weighted precision?
Accuracy is for the whole model, and your formula is correct. Precision for one class 'A' is TP_A / (TP_A + FP_A), as in the mentioned article. Now you can calculate the average precision of a model. There are a few ways of averaging (micro, macro, weighted), well explained here:

'weighted': Calculate metrics for each label, and find their average, weighted by support (the number of true instances for each label). This alters ‘macro’ to account for label imbalance; (...)
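As a rough illustration of the difference between the averaging modes, here is a pure-Python sketch of per-class, macro-averaged, and weighted-averaged precision. The labels are made up; in practice `sklearn.metrics.precision_score` with its `average` parameter does this for you.

```python
# Per-class precision plus macro- and weighted-averaged precision,
# computed directly from labels (pure-Python sketch, toy data).
def precision_averages(y_true, y_pred, classes):
    precision, support = {}, {}
    for c in classes:
        tp = sum(1 for t, p in zip(y_true, y_pred) if p == c and t == c)
        fp = sum(1 for t, p in zip(y_true, y_pred) if p == c and t != c)
        precision[c] = tp / (tp + fp) if (tp + fp) else 0.0
        support[c] = sum(1 for t in y_true if t == c)  # true instances of c
    macro = sum(precision.values()) / len(classes)      # unweighted mean
    n = len(y_true)
    weighted = sum(precision[c] * support[c] / n for c in classes)
    return precision, macro, weighted

y_true = ["A", "A", "A", "A", "B", "C"]
y_pred = ["A", "A", "A", "B", "B", "C"]
precision, macro, weighted = precision_averages(y_true, y_pred, ["A", "B", "C"])
# Class imbalance makes the weighted average differ from the macro average.
```
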
28,839
How to compute accuracy for multi class classification problem and how is accuracy equal to weighted precision?
I think your confusion comes from the 3x3 table. But ... the link has an example of precision and recall for Label A. Accuracy is very similar:

Accuracy for A = (30 + 60 + 10 + 20 + 80) / (30 + 20 + 10 + 50 + 60 + 10 + 20 + 20 + 80)

https://en.wikipedia.org/wiki/Confusion_matrix

I don't know what weighted precision is about.
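For concreteness, here is a small Python sketch of that one-vs-rest accuracy computation. The 3x3 table is inferred from the numbers quoted above (it is an assumption that this matches the linked example's layout exactly).

```python
# One-vs-rest accuracy for each class of a multi-class confusion matrix.
# The table below is reconstructed from the arithmetic quoted above.
mat = [
    [30, 20, 10],
    [50, 60, 10],
    [20, 20, 80],
]

def class_accuracy(mat, k):
    """Accuracy for class k: (TP + TN) / total, where TN is every cell
    lying outside both row k and column k."""
    total = sum(sum(row) for row in mat)
    tp = mat[k][k]
    tn = sum(mat[i][j]
             for i in range(len(mat))
             for j in range(len(mat))
             if i != k and j != k)
    return (tp + tn) / total

acc_A = class_accuracy(mat, 0)  # (30 + 60 + 10 + 20 + 80) / 300
```
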
28,840
How to compute accuracy for multi class classification problem and how is accuracy equal to weighted precision?
Try PyCM; it gives you accuracy and other parameters. PyCM is a multi-class confusion matrix library written in Python ... and a proper tool for post-classification model evaluation that supports most classes and overall statistics parameters. Check the HTML version of the output.
28,841
Which elements of a Neural Network can lead to overfitting?
Increasing the number of hidden units and/or layers may lead to overfitting because it makes it easier for the neural network to memorize the training set, that is, to learn a function that perfectly separates the training set but that does not generalize to unseen data.

Regarding the batch size: combined with the learning rate, the batch size determines how fast you learn (converge to a solution). Bad choices of these parameters usually lead to slow learning or an inability to converge to a solution, not to overfitting.

The number of epochs is the number of times you iterate over the whole training set. As a result, if your network has a large capacity (a lot of hidden units and hidden layers), the longer you train for, the more likely you are to overfit. To address this issue you can use early stopping: instead of training for a fixed number of epochs, you train your neural network for as long as the error on an external validation set keeps decreasing.

In addition, to prevent overfitting overall you should use regularization; techniques include L1 or L2 regularization on the weights and/or dropout. It is better to have a neural network with more capacity than necessary and use regularization to prevent overfitting than to try to perfectly adjust the number of hidden units and layers.
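A minimal sketch of the early-stopping loop just described, in pure Python. `train_step` and the toy validation curve are placeholders for real training code.

```python
# Early stopping: keep training while validation error improves; stop once
# it has failed to improve for `patience` consecutive epochs.
def early_stopping_train(train_step, validation_error, patience=3, max_epochs=100):
    best_err = float("inf")
    best_epoch = 0
    for epoch in range(max_epochs):
        train_step(epoch)                 # one pass over the training set
        err = validation_error(epoch)     # error on the held-out validation set
        if err < best_err:
            best_err, best_epoch = err, epoch
        elif epoch - best_epoch >= patience:
            break                         # validation error stopped improving
    return best_epoch, best_err

# Toy "validation curve": improves until epoch 5, then overfits.
errors = [1.0, 0.8, 0.6, 0.5, 0.45, 0.44, 0.5, 0.6, 0.7, 0.8, 0.9]
best_epoch, best_err = early_stopping_train(
    train_step=lambda e: None,            # placeholder: no real training here
    validation_error=lambda e: errors[e],
    patience=3,
    max_epochs=len(errors),
)
```
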
28,842
Reinforcement Learning on Historical Data
Is RL the right framework under such constraints?

It looks possible, but maybe some small detail that you have not given would make other approaches more feasible. For instance, if the notification events can be treated as more or less independent, then a supervised learning approach might be better, or at least more pragmatic.

More practically, it is not 100% clear what your states, timesteps and action choices will be. These need to be well-defined for RL approaches to work. In addition, you want to be able to construct states that have (or nearly have) the Markov property - essentially that anything known and non-random about the expected reward and next state is covered by the state.

How can we learn the optimal policy offline in such situations?

You want both an offline (data is historical, not "live") and off-policy (data is generated by a different policy to the one you want to evaluate) learner. In addition, I am guessing that you don't know the behaviour policies that generated your data, so you cannot use importance sampling.

Probably you can use a Q-learning approach, and work through your existing data either by replaying each trajectory using Q($\lambda$) in batches, or by some variant of DQN using sampled mini-batches. This is not guaranteed to work, as off-policy learning tends to be less stable than on-policy learning, and may require several attempts to find hyper-parameters that work. You will need a good number of samples that cover optimal or near-optimal choices on each step (not necessarily in the same episodes), because Q-learning relies on bootstrapping - essentially copying value estimates from action choices backwards to earlier timesteps so as to influence which earlier states the agent prefers to take actions to head towards.

If your state/action space is small enough (when you fully enumerate the states and actions), you may prefer to use the tabular form of Q-learning, as that has some guarantees of convergence. However, for most practical problems this is not really possible, so you will want to look at options for using approximation functions.

... and how do we evaluate the same?

If you can get realistic-looking converged action-values from your Q-learning (by inspection), then there are only two reasonable ways to assess performance:

- By running the agent in a simulation (and maybe further refining it there). I don't expect this is feasible for your scenario, because your environment includes decisions made by your customers. However, this is a good stepping-stone for some scenarios, for instance if the environment is dominated by basic real-world physics.

- By running the agent for real, maybe on some subset of the workload, and comparing actual rewards to predicted ones over enough time to establish statistical confidence.

You could also dry-run the agent alongside an existing operator, and get feedback on whether its suggestions for actions (and predictions of reward) seem realistic. That will be subjective feedback, and it will be hard to assess performance numerically when the actions may or may not be used. However, it would give you a little bit of QA.
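As a toy illustration of offline, off-policy learning, here is a tabular Q-learning update replayed over a fixed batch of logged transitions. The states, actions, and rewards are invented placeholders, not a model of the notification problem.

```python
from collections import defaultdict

# Tabular Q-learning replayed over a fixed batch of logged transitions
# (offline: no new interaction; off-policy: the logging policy is unknown).
def offline_q_learning(transitions, alpha=0.5, gamma=0.9, sweeps=50):
    """transitions: list of (state, action, reward, next_state or None)."""
    Q = defaultdict(float)
    actions = {a for _, a, _, _ in transitions}
    for _ in range(sweeps):
        for s, a, r, s_next in transitions:
            if s_next is None:                       # terminal transition
                target = r
            else:                                    # bootstrap from best next action
                target = r + gamma * max(Q[(s_next, b)] for b in actions)
            Q[(s, a)] += alpha * (target - Q[(s, a)])
    return Q

# Invented logged data: acting in s0 leads to a delayed reward via s1.
logged = [
    ("s0", "notify", 0.0, "s1"),
    ("s0", "wait",   0.0, None),
    ("s1", "notify", 1.0, None),
]
Q = offline_q_learning(logged)
# The delayed reward propagates back: Q(s0, notify) approaches gamma * 1.0.
```
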
28,843
Reinforcement Learning on Historical Data
The short answer is: No. Since you already have the historical actions and performance, this is a classical supervised learning problem that maps your (customer profile, action) tuple to a performance score. The reasons below are why reinforcement learning would be a bad choice for your task:

1. Reinforcement learning makes very INEFFICIENT use of data, so it usually requires a practically infinite amount of supplied data, either from a simulator or from real experience. I would think neither of these cases applies to you, since you will not want your untrained model to send random notifications to your customers in the early stages of training, and your problem would be considered solved if you already had a simulator.

2. Reinforcement learning is usually used to deal with long sequences of actions, where an early action can have a drastic influence on the final outcome, such as in chess. In that case, there is no clear partition of the final reward received at the end to each step of your actions, hence the Bellman equation is used explicitly or implicitly in reinforcement learning to solve this reward-attribution problem. On the other hand, your problem does not seem to have this sequential nature (unless I misunderstood, or your system is emailing back and forth with a customer), and each sample from your data is a single-step, i.i.d. example.
28,844
Reinforcement Learning on Historical Data
These papers provide a method called Fitted Q Iteration for batch reinforcement learning (i.e. learning a policy from past experiences):

https://pdfs.semanticscholar.org/2820/01869bd502c7917db8b32b75593addfbbc68.pdf
https://pdfs.semanticscholar.org/03fd/37aba0c900e232550cf8cc7f66e9465fae94.pdf

You will need a clearly defined reward function, states and actions. For testing, the best option is to use a small user cohort and A/B test with regard to your metrics.
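A bare-bones sketch of the Fitted Q Iteration loop from those papers: each iteration builds a supervised regression dataset of bootstrapped targets and fits a model to it. Here an exact-match table stands in for the regressor (the papers use tree ensembles or neural networks), and the transitions below are invented.

```python
# Fitted Q Iteration: repeatedly (1) compute bootstrapped regression targets
# for every logged (state, action) pair, then (2) fit a regressor to them.
def fitted_q_iteration(transitions, actions, gamma=0.9, iterations=30):
    Q = {}  # the current fitted "regressor": (state, action) -> value
    q = lambda s, a: Q.get((s, a), 0.0)
    for _ in range(iterations):
        dataset = {}
        for s, a, r, s_next in transitions:  # s_next is None if terminal
            if s_next is None:
                target = r
            else:
                target = r + gamma * max(q(s_next, b) for b in actions)
            dataset[(s, a)] = target          # regression target for this pair
        Q = dict(dataset)                     # "fit" = memorize exactly (toy stand-in)
    return Q

# Invented batch of logged experience.
logged = [("s0", "a1", 0.0, "s1"),
          ("s1", "a1", 1.0, None),
          ("s1", "a2", 0.0, None)]
Q = fitted_q_iteration(logged, actions=["a1", "a2"])
```
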
28,845
Is multiple comparisons a problem for exploratory analyses?
However, how do you correct between tests, if you have several separate hypotheses?

There is a lot of debate going on right now about this. There is almost a consensus now that you shouldn't just go fishing around in your data for significant p-values, as this inflates Type I error (obviously). However, it is also true that effect sizes are not a product of any previous analyses you have done—they are only products of the data. So why would a test you do 20th be less believable than the one you happened to do 1st because of your a priori thinking? Obviously, there is a tension here.

Should I still correct for multiple comparisons and how do you generally do that between say 4 t-tests and 6 ANOVAs?

There is no straightforward or agreed-upon way of doing this. There are any number of corrections for post hoc comparisons (Bonferroni, HSD, Holm, whatever). A professor of mine had really great advice in saying: "If there are many ways of doing something, it is because there is no best way of doing it." Or, at least, there is no widely agreed-upon way of doing it. You could simply divide .05 by 10 (the total number of tests you did, similar to a Bonferroni correction) to be very conservative. However, this might give you a Type II error simply because you were wrong a priori, which we don't want either!

Is doing this many tests fishing and generally bad practice?

Collecting data, not finding what you wanted a priori, and then digging around, doing dozens of tests, and then finding something significant is not bad practice. HOWEVER, it becomes bad practice when you communicate the finding to an audience like it was the very first test you did, like you had predicted it the whole time, and hide any other tests you did beforehand. This becomes a disastrous practice if you are hiding a test you did beforehand that might contradict the significant finding you found.

Should I ignore it completely?

No, but be more skeptical than you generally would (I hope you are always skeptical of what one study finds).

Could I simply call it exploratory?

Yes. It is OK to include this analysis in a paper. However, be very upfront about how many tests you did beforehand, that you hadn't initially predicted it, and report any analyses that you did in the data that could contradict whatever you found. Also, do not take one study too seriously. I forgot who said it first—I believe it has been attributed to many people—but the saying goes that "an ounce of replication is worth a ton of inferential statistics." Instead of getting in such a fuss about how to correct a p-value, why not take the interesting finding you had and replicate it with a well-powered sample?

If I were reviewing a paper and someone said, "We dug around in the data and found X. Here is the study that shows X," I would likely reject. However, if someone said, "We dug around in the data and found X. Here is a study that shows X. Now here are Studies 2, 3, and 4 that directly and conceptually replicate X," I would say: awesome! Who cares that you didn't predict it at first? You have now done a number of tests replicating it, leaving less doubt that it was Type I error.

Overall: Dig around all you want. Get to know your data. But don't find p < .05, then go run off and tell people only about that, pretending like you had the idea the whole time. If you find something interesting, think about it, put it in your back pocket, and try to replicate it if you think it is really worth something. This is a hot-button issue, and I'm sure some may not agree with me. But I think what I've proposed is a reasonable way to approach exploratory analyses when correcting p-values isn't within a test (i.e., I just do HSD on post hoc comparisons for an ANOVA), but between a number of different, conceptually different statistical tests.
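To make the corrections concrete, here is a small pure-Python sketch of the Bonferroni threshold mentioned above (.05 / 10) alongside Holm's step-down procedure. The p-values are made up for illustration.

```python
# Bonferroni: one conservative threshold alpha / m for all m tests.
def bonferroni_alpha(alpha, m):
    return alpha / m

# Holm's step-down procedure: compare the k-th smallest p-value (0-indexed)
# to alpha / (m - k), rejecting until the first comparison fails.
def holm_rejections(p_values, alpha=0.05):
    m = len(p_values)
    order = sorted(range(m), key=lambda i: p_values[i])
    rejected = [False] * m
    for k, i in enumerate(order):
        if p_values[i] <= alpha / (m - k):
            rejected[i] = True
        else:
            break  # all larger p-values also fail to reject
    return rejected

# Made-up p-values from 10 tests (e.g. 4 t-tests and 6 ANOVA omnibus tests).
pvals = [0.001, 0.2, 0.03, 0.004, 0.6, 0.04, 0.5, 0.7, 0.8, 0.9]
bonf = bonferroni_alpha(0.05, len(pvals))  # the .05 / 10 from the answer
rej = holm_rejections(pvals)               # Holm is uniformly more powerful
```
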
Is multiple comparisons a problem for exploratory analyses?
However, how do you correct between tests, if you have several separate hypotheses? There is a lot of debate going on right now about this. There is almost a consensus now that you shouldn't just go
Is multiple comparisons a problem for exploratory analyses? However, how do you correct between tests, if you have several separate hypotheses? There is a lot of debate going on right now about this. There is almost a consensus now that you shouldn't just go fishing around in your data for significant p-values, as this inflates Type I error (obviously). However, it is also true that effect sizes are not a product of any previous analyses you have done—they are only products of the data. So why would a test you do 20th be less believable than the one you happened to do 1st because of your a priori thinking? Obviously, there is a tension here. Should I still correct for multiple comparisons and how do you generally do that between say 4 t-tests and 6 ANOVAs? There is no straightforward or agreed upon way of doing this. There are any number of corrections for post hoc comparisons (Bonferroni, HSD, Holm, whatever). A professor of mine had really great advice in saying that: "If there are many ways of doing something, it is because there is no best way of doing it." Or, at least, there is no widely agreed-upon way of doing it. You could simply divide .05 by 10 (the number of tests you did total, similar to a Bonferroni correction) to be very conservative. However, this might give you a Type II error simply because you were wrong a priori, which we don't want either! Is doing this many tests fishing and general bad practice? Collecting data, not finding what you wanted a priori, and then digging around, doing dozens of tests, and then finding something significant is not bad practice. HOWEVER, it becomes bad practice when you communicate the finding to an audience like it was the very first test you did, like you had predicted it the whole time, and hiding any other tests you did beforehand. This becomes a disastrous practice if you are hiding a test you did beforehand that might contradict the significant finding you found. 
Should I ignore it completely No, but be more skeptical than you generally would (I hope you are always skeptical of what one study finds). could I simply call it exploratory? Yes. It is OK to include this analysis in a paper. However, be very upfront about how many tests you did beforehand, that you hadn't initially predicted it, and report any analyses that you did in the data that could contradict whatever you found. Also, do not take one study too seriously. I forgot who said it first—I believe it has been attributed to many people—but the saying goes that, "An ounce of replication is worth a ton of inferential statistics." Instead of getting so in a fuss about how to correct a p-value, why not take the interesting finding you had and replicate it with a well-powered sample? If I were reviewing a paper and someone says, "We dug around in the data and found X. Here is the study that shows X," I would likely reject. However, if someone said, "We dug around in the data and found X. Here is a study that shows X. Now here are Studies 2, 3, and 4 that directly and conceptually replicate X." I would say, awesome! Who cares that you didn't predict it at first? You have now done a number of tests replicating it, showing less doubt that it was Type I error. Overall: Dig around all you want. Get to know your data. But don't find p < .05, go run off and tell people only about that, pretending like you had the idea the whole time. If you find something interesting, think about it, put it in your back pocket, and try to replicate it if you think it is really worth something. This is a hot-button issue, and I'm sure some may not agree with me. But I think what I've proposed is a reasonable way to approach exploratory analyses when correcting p-values isn't within a test (i.e., I just do HSD on post-hoc comparisons for an ANOVA), but between a number of different, conceptually different statistical tests.
28,846
Why can't we use $R^2$ for transformations of dependent variables?
It's a good question, because "different quantities" doesn't seem to be much of an explanation. There are two important reasons to be wary of using $R^2$ to compare these models: it is too crude (it doesn't really assess goodness of fit) and it is going to be inappropriate for at least one of the models. This reply addresses that second issue. Theoretical Treatment $R^2$ compares the variance of the model residuals to the variance of the responses. Variance is a mean square additive deviation from a fit. As such, we may understand $R^2$ as comparing two models of the response $y$. The "base" model is $$y_i = \mu + \delta_i\tag{1}$$ where $\mu$ is a parameter (the theoretical mean response) and the $\delta_i$ are independent random "errors," each with zero mean and a common variance of $\tau^2$. The linear regression model introduces the vectors $x_i$ as explanatory variables: $$y_i = \beta_0 + x_i \beta + \varepsilon_i.\tag{2}$$ The number $\beta_0$ and the vector $\beta$ are the parameters (the intercept and the "slopes"). The $\varepsilon_i$ again are independent random errors, each with zero mean and common variance $\sigma^2$. $R^2$ estimates the reduction in variance, $\tau^2-\sigma^2$, compared to the original variance $\tau^2$. When you take logarithms and use least squares to fit the model, you implicitly are comparing a relationship of the form $$\log(y_i) = \nu + \zeta_i\tag{1a}$$ to one of the form $$\log(y_i) = \gamma_0 + x_i\gamma + \eta_i.\tag{2a}$$ These are just like models $(1)$ and $(2)$ but with log responses. They are not equivalent to the first two models, though. For instance, exponentiating both sides of $(2\text{a})$ would give $$y_i = \exp(\log(y_i)) = \exp(\gamma_0 + x_i\gamma)\exp(\eta_i).$$ The error terms $\exp(\eta_i)$ now multiply the underlying relationship $y_i = \exp(\gamma_0 + x_i\gamma)$. 
Consequently the variances of the responses are $$\operatorname{Var}(y_i) = \exp(\gamma_0 + x_i\gamma)^2\operatorname{Var}(e^{\eta_i}).$$ The variances depend on the $x_i$. That's not model $(2)$, which supposes the variances are all equal to a constant $\sigma^2$. Usually, only one of these sets of models can be a reasonable description of the data. Applying the second set $(1\text{a})$ and $(2\text{a})$ when the first set $(1)$ and $(2)$ is a good model, or the first when the second is good, amounts to working with a nonlinear, heteroscedastic dataset, which therefore ought to be fit poorly with a linear regression. When either of these situations is the case, we might expect the better model to exhibit the larger $R^2$. However, what about if neither is the case? Can we still expect the larger $R^2$ to help us identify the better model? Analysis In some sense this isn't a good question, because if neither model is appropriate, we ought to find a third model. However, the issue before us concerns the utility of $R^2$ in helping us make this determination. Moreover, many people think first about the shape of the relationship between $x$ and $y$--is it linear, is it logarithmic, is it something else--without being concerned about the characteristics of the regression errors $\varepsilon_i$ or $\eta_i$. Let us therefore consider a situation where our model gets the relationship right but is wrong about its error structure, or vice versa. Such a model (which commonly occurs) is a least-squares fit to an exponential relationship, $$y_i = \exp\left(\alpha_0 + x_i\alpha\right) + \theta_i.\tag{3}$$ Now the logarithm of $y$ is a linear function of $x$, as in $(2\text{a})$, but the error terms $\theta_i$ are additive, as in $(2)$. In such cases $R^2$ might mislead us into choosing the model with the wrong relationship between $x$ and $y$. Here is an illustration of model $(3)$. There are $300$ observations for $x_i$ (a 1-vector equally spaced between $1.0$ and $1.6$). 
The left panel shows the original $(x,y)$ data while the right panel shows the $(x,\log(y))$ transformed data. The dashed red lines plot the true underlying relationship, while the solid blue lines show the least-squares fits. The data and the true relationship are the same in both panels: only the models and their fits differ. The fit to the log responses at the right clearly is good: it nearly coincides with the true relationship and both are linear. The fit to the original responses at the left clearly is worse: it is linear while the true relationship is exponential. Unfortunately, it has a notably larger value of $R^2$: $0.70$ compared to $0.56$. That's why we should not trust $R^2$ to lead us to the better model. That's why we should not be satisfied with the fit even when $R^2$ is "high" (and in many applications, a value of $0.70$ would be considered high indeed). Incidentally, a better way to assess these models includes goodness of fit tests (which would indicate the superiority of the log model at the right) and diagnostic plots for stationarity of the residuals (which would highlight problems with both models). Such assessments would naturally lead one either to a weighted least-squares fit of $\log(y)$ or directly to model $(3)$ itself, which would have to be fit using maximum likelihood or nonlinear least squares methods.
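The pitfall described in the Analysis section can be reproduced numerically. Below is a small pure-Python sketch, not the code behind the figure: the parameters ($\alpha_0 = 1$, $\alpha = 3$, noise SD $= 10$) are illustrative choices, and the resulting $R^2$ values will differ from the $0.70$ and $0.56$ quoted above. It simulates model $(3)$ (exponential mean, additive homoscedastic noise) and computes $R^2$ for a least-squares line fit to $y$ and to $\log(y)$:

```python
import math
import random

# Simulate model (3): exponential mean, additive homoscedastic noise.
# alpha0, alpha1 and the noise SD are illustrative, not the figure's values.
random.seed(1)
n = 300
alpha0, alpha1, noise_sd = 1.0, 3.0, 10.0
x = [1.0 + 0.6 * i / (n - 1) for i in range(n)]           # equally spaced in [1.0, 1.6]
y = [math.exp(alpha0 + alpha1 * xi) + random.gauss(0, noise_sd) for xi in x]

def r_squared(xs, ys):
    """R^2 of an ordinary least-squares line of ys on xs."""
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    b = (sum((u - mx) * (v - my) for u, v in zip(xs, ys))
         / sum((u - mx) ** 2 for u in xs))
    a = my - b * mx
    ss_res = sum((v - a - b * u) ** 2 for u, v in zip(xs, ys))
    ss_tot = sum((v - my) ** 2 for v in ys)
    return 1.0 - ss_res / ss_tot

r2_linear = r_squared(x, y)                               # fit to raw responses
r2_log = r_squared(x, [math.log(v) for v in y])           # fit to log responses
print(r2_linear, r2_log)
```

Whichever of the two numbers comes out larger, the comparison is between different quantities (variance of $y$ versus variance of $\log y$), which is exactly why it cannot be trusted to pick the better-specified model.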
28,847
Why can't we use $R^2$ for transformations of dependent variables?
I'll give a very non-technical and intuitive answer to this. Imagine you have both the linear and the log model, and suppose the assumptions of linear regression hold for the linear model, with homoskedastic error terms. If these assumptions hold, your assumption about the true relationship between the regressors and the regressand is supported. However, when you modify this model by taking the log of the dependent variable as the new dependent variable, the error terms inevitably change and may no longer be homoskedastic, implying that you may have assumed the wrong true relationship. So even if the log version has a better R squared on your sample, in the long run your model may output very erroneous predictions as you move to more extreme values, owing to the wrong assumption about the true relationship between the variables. Hence, it is wiser to check the assumptions before the R squared. Hope that helps. This is my first answer on this community, so I apologise if it does not follow the standards.
28,848
What is the probability distribution of this random sum of non-iid Bernoulli variables?
The calls (that is, the $X_i$) arrive according to a Poisson process. The total number of calls $N$ follows a Poisson distribution. Divide the calls into two types, e.g. whether $X_i = 1$ or $X_i = 0$. The goal is to determine the process that generates the $1$s. This is trivial if $X_i = 1$ with a fixed probability $p$: by the thinning (splitting) property of Poisson processes, the full process thinned to just the $1$s would also be a Poisson process, with rate $p\mu$. In fact this is the case; we just require an additional step to get there. Marginalize over $p_i$, so that $$\mathrm{Pr}(X_i|\alpha, \beta) = \int_0^1 p_i^{X_i} (1-p_i)^{1-X_i} \frac{p_i^{\alpha-1} (1-p_i)^{\beta-1}}{\mathcal{B}(\alpha, \beta)} dp_i = \frac{\mathcal{B}(X_i + \alpha, 1 - X_i + \beta)}{\mathcal{B}(\alpha, \beta)}$$ where $\mathcal{B}(a, b) = \frac{\Gamma(a)\Gamma(b)}{\Gamma(a + b)}$ is the beta function. Using the fact that $\Gamma(x+1) = x\Gamma(x)$, the above simplifies to: $$\mathrm{Pr}(X_i = 1|\alpha, \beta) = \frac{\Gamma(1+\alpha)\Gamma(\beta)}{\Gamma(1+\alpha+\beta)} \frac{\Gamma(\alpha+\beta)}{\Gamma(\alpha)\Gamma(\beta)} = \frac{\alpha}{\alpha+\beta}$$ In other words, $X_i \sim \mathrm{Bernoulli}(\frac{\alpha}{\alpha+\beta})$. By the thinning property, $Y$ is Poisson distributed with rate $\frac{\alpha \mu}{\alpha+\beta}$. A numerical example (with R) ... in the figure, the vertical lines are from simulation and the red points are the pmf derived above:

draw <- function(alpha, beta, mu) {
  N <- rpois(1, mu)
  p <- rbeta(N, alpha, beta)
  sum(rbinom(N, size=1, prob=p))
}
pmf <- function(y, alpha, beta, mu) dpois(y, alpha*mu/(alpha+beta))

y <- replicate(30000, draw(4, 5, 10))
tb <- table(y)
# simulated pmf
plot(tb/sum(tb), type="h", xlab="Y", ylab="Probability")
# analytic pmf
points(0:max(y), pmf(0:max(y), 4, 5, 10), col="red")
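The key simplification step above, $\mathcal{B}(X_i+\alpha,\,1-X_i+\beta)/\mathcal{B}(\alpha,\beta)$ collapsing to $\alpha/(\alpha+\beta)$ for $X_i=1$, can also be sanity-checked numerically with the Python standard library. A small sketch (the values $\alpha=4$, $\beta=5$ are arbitrary):

```python
import math

def beta_fn(a, b):
    """Beta function B(a, b) = Gamma(a) * Gamma(b) / Gamma(a + b)."""
    return math.gamma(a) * math.gamma(b) / math.gamma(a + b)

alpha, beta = 4.0, 5.0   # arbitrary shape parameters, for illustration only

# P(X = 1 | alpha, beta) = B(1 + alpha, beta) / B(alpha, beta)
p1 = beta_fn(1 + alpha, beta) / beta_fn(alpha, beta)
# P(X = 0 | alpha, beta) = B(alpha, 1 + beta) / B(alpha, beta)
p0 = beta_fn(alpha, 1 + beta) / beta_fn(alpha, beta)

print(p1, alpha / (alpha + beta))   # both equal 4/9
print(p0 + p1)                      # the two probabilities sum to 1
```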
28,849
What is the probability distribution of this random sum of non-iid Bernoulli variables?
Since $p_i$ is a random variable with a $\operatorname{Beta}(\alpha,\beta)$ distribution, you have $\mathbb{E}[p_i]= \dfrac{\alpha}{\alpha+\beta}$, and this is in fact the probability that John actually solves the $i$th problem, independently of all the others. Since the total number of problems in a day has a Poisson distribution with parameter $\mu$ and each will be solved with probability $\dfrac{\alpha}{\alpha+\beta}$, the number John solves each day has a Poisson distribution with parameter $\dfrac{\mu\alpha}{\alpha+\beta}$. Your calculation of the probability he does not solve any problems should be $\mathbb{P}(Y=0) = e^{-{\mu\alpha}/({\alpha+\beta})}$
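A quick Monte Carlo check of this result, using only the Python standard library (the values $\alpha=4$, $\beta=5$, $\mu=10$ are illustrative): simulate the hierarchical model directly and compare the empirical $\mathbb{P}(Y=0)$ with $e^{-\mu\alpha/(\alpha+\beta)}$.

```python
import math
import random

def rpois(mu, rng):
    """One Poisson(mu) draw via Knuth's algorithm (fine for small mu)."""
    L, k, p = math.exp(-mu), 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

alpha, beta, mu = 4.0, 5.0, 10.0   # illustrative parameter values
rng = random.Random(0)

trials = 50_000
zeros = 0
for _ in range(trials):
    n = rpois(mu, rng)
    # each problem gets its own Beta-distributed p_i, then a Bernoulli(p_i) draw
    y = sum(rng.random() < rng.betavariate(alpha, beta) for _ in range(n))
    if y == 0:
        zeros += 1

theory = math.exp(-mu * alpha / (alpha + beta))   # P(Y=0) for Poisson(mu*alpha/(alpha+beta))
print(zeros / trials, theory)
```

The empirical frequency of zero-solution days should match the closed form to within Monte Carlo error.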
28,850
Determining statistical significance of linear regression coefficient in the presence of multicollinearity
I would regress "DUI per capita" (Y) on "liquor stores per capita" (X) and "population size" (Z). This way your Y reflects the propensity of urban people to drive drunk, while X is the relevant population characteristic of a given city. Z is a control variable, just in case there's a size effect on Y. I don't think you are going to see a multicollinearity issue in this setup. This setup is more interesting than your model 1. Here, your baseline assumption is that the number of DUIs is proportional to population, while $\beta_Z$ would capture nonlinearity, e.g. people in larger cities being more prone to drunk driving. Also, X reflects the cultural and legal environment directly, already adjusted for size; you may end up with roughly the same X for cities of different sizes in the South. This also allows you to introduce other control variables such as Red/Blue state, Coastal/Continental, etc.
28,851
Determining statistical significance of linear regression coefficient in the presence of multicollinearity
If estimating your model with ordinary least squares, your second regression is rather problematic. And you may want to think about how the variance of your error term varies with city size. Regression (2) is equivalent to your regression (1) where observations are weighted by the square of the city's population: For each city $i$, let $y_i$ be drunk driving incidents per capita, let $x_i$ be liquor stores per capita, and let $n_i$ be the city's population. Regression (1) is: $$y_i = a + b x_i + \epsilon_i $$ If you run regression (2) without a constant, you've essentially scaled each observation of regression (1) by the population, that is, you're running: $$ n_i y_i = a n_i + b n_i x_i + u_i $$ This is weighted least squares, and the weights you're applying are the square of the city's population. That's a lot of weight you're giving the largest cities?! Note that if you had an observation for each individual in a city and assigned each individual the average value for the city, that would be equivalent to running a regression where you are weighting each city by population (not population squared).
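The claimed equivalence is easy to verify numerically. The sketch below (plain Python, with made-up city data) fits regression (2) by OLS without a constant and regression (1) by weighted least squares with weights $n_i^2$, each by solving its own 2x2 normal equations; the coefficient estimates coincide:

```python
import random

# Hypothetical data: 50 cities with per-capita rates y, x and populations n.
rng = random.Random(42)
n = [rng.uniform(1e4, 1e6) for _ in range(50)]          # city populations
x = [rng.uniform(0.1, 2.0) for _ in range(50)]          # liquor stores per capita
y = [0.5 + 0.3 * xi + rng.gauss(0, 0.1) for xi in x]    # DUIs per capita

def solve2(a11, a12, a22, b1, b2):
    """Solve the symmetric 2x2 system [[a11, a12], [a12, a22]] [u, v] = [b1, b2]."""
    det = a11 * a22 - a12 * a12
    return ((b1 * a22 - b2 * a12) / det, (a11 * b2 - a12 * b1) / det)

# Regression (2): total DUIs (n*y) on n and n*x, with no constant.
NY = [ni * yi for ni, yi in zip(n, y)]
coef_counts = solve2(sum(ni * ni for ni in n),
                     sum(ni * ni * xi for ni, xi in zip(n, x)),
                     sum((ni * xi) ** 2 for ni, xi in zip(n, x)),
                     sum(ni * nyi for ni, nyi in zip(n, NY)),
                     sum(ni * xi * nyi for ni, xi, nyi in zip(n, x, NY)))

# Regression (1) fit by weighted least squares with weights w_i = n_i^2.
w = [ni * ni for ni in n]
coef_wls = solve2(sum(w),
                  sum(wi * xi for wi, xi in zip(w, x)),
                  sum(wi * xi * xi for wi, xi in zip(w, x)),
                  sum(wi * yi for wi, yi in zip(w, y)),
                  sum(wi * xi * yi for wi, xi, yi in zip(w, x, y)))

print(coef_counts)
print(coef_wls)   # agree to floating-point precision: same normal equations
```

Both fits minimize $\sum_i n_i^2 (y_i - a - b x_i)^2$, so they produce the same intercept and slope, which is the sense in which regression (2) gives the largest cities population-squared weight.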
28,852
Determining statistical significance of linear regression coefficient in the presence of multicollinearity
I ran a few experiments on simulated data to see which method works best. Please read my findings below. Let's look at two different scenarios - first, where there is no direct relationship between DUI and liquor stores, and second, where we do have a direct relationship - then examine each of the methods to see which works best.

Case 1: No direct relationship, but both are related to the population

library(rmutil)

############
## Simulating Data
set.seed(111)
# Simulating city populations
popln <- rpareto(n=10000, m=10000, s=1.2)
# Simulating DUI numbers
e1 <- rnorm(10000, mean=0, sd=15)
DUI <- 100 + popln * 0.04 + e1
summary(DUI)
truehist(log(DUI))
# Simulating number of liquor stores
e2 <- rnorm(10000, mean=0, sd=5)
Nbr_Liquor_Stores <- 20 + popln * 0.009 + e2
summary(Nbr_Liquor_Stores)
truehist(log(Nbr_Liquor_Stores))
dat <- data.frame(popln, DUI, Nbr_Liquor_Stores)

Now that the data is simulated, let's see how each of the methods fares.

## Method 0: Simple OLS
fit0 <- lm(DUI ~ Nbr_Liquor_Stores, data=dat)
summary(fit0)

Coefficients:
                   Estimate Std. Error  t value Pr(>|t|)
(Intercept)       9.4353630  0.2801544    33.68   <2e-16 ***
Nbr_Liquor_Stores 4.4444207  0.0001609 27617.49   <2e-16 ***

Nbr_Liquor_Stores is highly significant, as expected, although the relationship is indirect.

## Method 1: Divide DUI by population and then regress
fit1 <- lm(I(DUI/popln) ~ Nbr_Liquor_Stores, data=dat)
summary(fit1)

                    Estimate Std. Error t value Pr(>|t|)
(Intercept)        4.981e-01  4.143e-02  12.022   <2e-16 ***
Nbr_Liquor_Stores -1.325e-05  2.380e-05  -0.557    0.578

Nbr_Liquor_Stores has no significance. Seems to work, but let's not jump to conclusions yet.

## Method 2: Regress DUI on liquor stores, with population as a control
fit2 <- lm(DUI ~ Nbr_Liquor_Stores + popln, data=dat)
summary(fit2)

                   Estimate Std. Error t value Pr(>|t|)
(Intercept)       1.003e+02  6.022e-01 166.569   <2e-16 ***
Nbr_Liquor_Stores -1.603e-02  3.042e-02  -0.527    0.598
popln              4.014e-02  2.738e-04 146.618   <2e-16 ***

Nbr_Liquor_Stores is not significant, and the p-value is also quite close to Method 1's.

## Method 3: "DUI per capita" on "liquor stores per capita" and "population size"
fit3 <- lm(I(DUI/popln) ~ I(Nbr_Liquor_Stores/popln) + popln, data=dat)
summary(fit3)

                             Estimate Std. Error t value Pr(>|t|)
(Intercept)                 2.841e-02  1.300e-02   2.187   0.0288 *
I(Nbr_Liquor_Stores/popln)  4.886e+00  1.603e-02 304.867   <2e-16 ***
popln                      -8.426e-09  6.675e-08  -0.126   0.8996

(Nbr_Liquor_Stores/popln) is highly significant! I didn't expect that; maybe this method isn't the best for your problem statement.

Case 2: Direct relationship with both population and Nbr_Liquor_Stores

### Simulating Data
set.seed(111)
# Simulating city populations
popln <- rpareto(n=10000, m=10000, s=1.2)
# Simulating number of liquor stores
e2 <- rnorm(10000, mean=0, sd=5)
Nbr_Liquor_Stores <- 20 + popln * 0.009 + e2
summary(Nbr_Liquor_Stores)
truehist(log(Nbr_Liquor_Stores))
# Simulating DUI numbers
e1 <- rnorm(10000, mean=0, sd=15)
DUI <- 100 + popln * 0.021 + Nbr_Liquor_Stores * 0.01 + e1
summary(DUI)
truehist(log(DUI))
dat <- data.frame(popln, DUI, Nbr_Liquor_Stores)

Let's see the performance of each of the methods in this scenario.

## Method 0: Simple OLS
fit0 <- lm(DUI ~ Nbr_Liquor_Stores, data=dat)
summary(fit0)

                   Estimate Std. Error t value Pr(>|t|)
(Intercept)       5.244e+01  1.951e-01   268.8   <2e-16 ***
Nbr_Liquor_Stores 2.343e+00  1.121e-04 20908.9   <2e-16 ***

Expected, but not a great method for making causal inferences.

## Method 1: Divide DUI by population and then regress
fit1 <- lm(I(DUI/popln) ~ Nbr_Liquor_Stores, data=dat)
summary(fit1)

                    Estimate Std. Error t value Pr(>|t|)
(Intercept)        4.705e-01  4.005e-02  11.747   <2e-16 ***
Nbr_Liquor_Stores -1.294e-05  2.301e-05  -0.562    0.574

That is a surprise for me; I was expecting this method to capture the relationship, but it doesn't pick it up. So this method fails in this scenario!

## Method 2: Regress DUI on liquor stores, with population as a control
fit2 <- lm(DUI ~ Nbr_Liquor_Stores + popln, data=dat)
summary(fit2)

                   Estimate Std. Error t value Pr(>|t|)
(Intercept)       1.013e+02  5.945e-01 170.391   <2e-16 ***
Nbr_Liquor_Stores -5.484e-02  2.825e-02  -1.941   0.0523 .
popln              2.158e-02  2.543e-04  84.875   <2e-16 ***

Nbr_Liquor_Stores is (marginally) significant, and the p-value makes a lot of sense. A clear winner for me.

## Method 3: "DUI per capita" on "liquor stores per capita" and "population size"
fit3 <- lm(I(DUI/popln) ~ I(Nbr_Liquor_Stores/popln) + popln, data=dat)
summary(fit3)

                             Estimate Std. Error t value Pr(>|t|)
(Intercept)                 6.540e-02  1.485e-02   4.405 1.07e-05 ***
I(Nbr_Liquor_Stores/popln)  3.915e+00  1.553e-02 252.063  < 2e-16 ***
popln                      -2.056e-08  7.635e-08  -0.269    0.788

TLDR; Method 2 produces the most accurate p-values across the different scenarios.
Determining statistical significance of linear regression coefficient in the presence of multicollin
I ran a few experiments on simulated data to see which method works best. Please read my findings below. Lets look at two different scenarios - First where there is no direct relationship between DUI
Determining statistical significance of linear regression coefficient in the presence of multicollinearity

I ran a few experiments on simulated data to see which method works best. Please read my findings below. Let's look at two different scenarios: first, where there is no direct relationship between DUI and liquor stores, and second, where we do have a direct relationship. Then we examine each of the methods to see which works best.

Case 1: No direct relationship, but both are related to the population

library(rmutil)
library(MASS)   # truehist() comes from MASS

############
## Simulating data
set.seed(111)

# Simulating city populations
popln <- rpareto(n = 10000, m = 10000, s = 1.2)

# Simulating DUI numbers
e1 <- rnorm(10000, mean = 0, sd = 15)
DUI <- 100 + popln * 0.04 + e1
summary(DUI)
truehist(log(DUI))

# Simulating number of liquor stores
e2 <- rnorm(100, mean = 0, sd = 5)
Nbr_Liquor_Stores <- 20 + popln * 0.009 + e2
summary(Nbr_Liquor_Stores)
truehist(log(Nbr_Liquor_Stores))

dat <- data.frame(popln, DUI, Nbr_Liquor_Stores)

Now that the data is simulated, let's see how each of the methods fares.

## Method 0: Simple OLS
fit0 <- lm(DUI ~ Nbr_Liquor_Stores, data = dat)
summary(fit0)

Coefficients:
                   Estimate Std. Error  t value Pr(>|t|)
(Intercept)       9.4353630  0.2801544    33.68   <2e-16 ***
Nbr_Liquor_Stores 4.4444207  0.0001609 27617.49   <2e-16 ***

Nbr_Liquor_Stores is highly significant, as expected, although the relationship is indirect.

## Method 1: Divide DUI by population and then regress on liquor stores
fit1 <- lm(I(DUI/popln) ~ Nbr_Liquor_Stores, data = dat)
summary(fit1)

                    Estimate Std. Error t value Pr(>|t|)
(Intercept)        4.981e-01  4.143e-02  12.022   <2e-16 ***
Nbr_Liquor_Stores -1.325e-05  2.380e-05  -0.557    0.578

Nbr_Liquor_Stores has no significance. Seems to work, but let's not jump to conclusions yet.

## Method 2: Regress DUI on liquor stores, controlling for population
fit2 <- lm(DUI ~ Nbr_Liquor_Stores + popln, data = dat)
summary(fit2)

                    Estimate Std. Error t value Pr(>|t|)
(Intercept)        1.003e+02  6.022e-01 166.569   <2e-16 ***
Nbr_Liquor_Stores -1.603e-02  3.042e-02  -0.527    0.598
popln              4.014e-02  2.738e-04 146.618   <2e-16 ***

Nbr_Liquor_Stores is not significant; the p-value is also quite close to Method 1's.

## Method 3: "DUI per capita" on "liquor stores per capita" and "population size"
fit3 <- lm(I(DUI/popln) ~ I(Nbr_Liquor_Stores/popln) + popln, data = dat)
summary(fit3)

                             Estimate Std. Error t value Pr(>|t|)
(Intercept)                 2.841e-02  1.300e-02   2.187   0.0288 *
I(Nbr_Liquor_Stores/popln)  4.886e+00  1.603e-02 304.867   <2e-16 ***
popln                      -8.426e-09  6.675e-08  -0.126   0.8996

(Nbr_Liquor_Stores/popln) is highly significant! I didn't expect that; maybe this method isn't the best for your problem statement.

Case 2: Direct relationship with both population and Nbr_Liquor_Stores

### Simulating data
set.seed(111)

# Simulating city populations
popln <- rpareto(n = 10000, m = 10000, s = 1.2)

# Simulating number of liquor stores
e2 <- rnorm(100, mean = 0, sd = 5)
Nbr_Liquor_Stores <- 20 + popln * 0.009 + e2
summary(Nbr_Liquor_Stores)
truehist(log(Nbr_Liquor_Stores))

# Simulating DUI numbers
e1 <- rnorm(10000, mean = 0, sd = 15)
DUI <- 100 + popln * 0.021 + Nbr_Liquor_Stores * 0.01 + e1
summary(DUI)
truehist(log(DUI))

dat <- data.frame(popln, DUI, Nbr_Liquor_Stores)

Let's see the performance of each of the methods in this scenario.

## Method 0: Simple OLS
fit0 <- lm(DUI ~ Nbr_Liquor_Stores, data = dat)
summary(fit0)

                   Estimate Std. Error t value Pr(>|t|)
(Intercept)       5.244e+01  1.951e-01   268.8   <2e-16 ***
Nbr_Liquor_Stores 2.343e+00  1.121e-04 20908.9   <2e-16 ***

Expected, but not a great method for making causal inferences.

## Method 1: Divide DUI by population and then regress on liquor stores
fit1 <- lm(I(DUI/popln) ~ Nbr_Liquor_Stores, data = dat)
summary(fit1)

                    Estimate Std. Error t value Pr(>|t|)
(Intercept)        4.705e-01  4.005e-02  11.747   <2e-16 ***
Nbr_Liquor_Stores -1.294e-05  2.301e-05  -0.562    0.574

That is a surprise for me; I was expecting this method to capture the relationship, but it doesn't pick it up. So this method fails in this scenario!

## Method 2: Regress DUI on liquor stores, controlling for population
fit2 <- lm(DUI ~ Nbr_Liquor_Stores + popln, data = dat)
summary(fit2)

                    Estimate Std. Error t value Pr(>|t|)
(Intercept)        1.013e+02  5.945e-01 170.391   <2e-16 ***
Nbr_Liquor_Stores -5.484e-02  2.825e-02  -1.941   0.0523 .
popln              2.158e-02  2.543e-04  84.875   <2e-16 ***

Nbr_Liquor_Stores is significant, and the p-value makes a lot of sense. A clear winner for me.

## Method 3: "DUI per capita" on "liquor stores per capita" and "population size"
fit3 <- lm(I(DUI/popln) ~ I(Nbr_Liquor_Stores/popln) + popln, data = dat)
summary(fit3)

                             Estimate Std. Error t value Pr(>|t|)
(Intercept)                 6.540e-02  1.485e-02   4.405 1.07e-05 ***
I(Nbr_Liquor_Stores/popln)  3.915e+00  1.553e-02 252.063  < 2e-16 ***
popln                      -2.056e-08  7.635e-08  -0.269    0.788

TL;DR: Method 2 produces the most accurate p-values across the different scenarios.
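The same Method 0 vs. Method 2 contrast can be reproduced outside R. Below is a minimal numpy sketch of my own (a simplified re-simulation, not the R outputs above) that compares the t-statistics of the liquor-store coefficient with and without controlling for population:

```python
import numpy as np

rng = np.random.default_rng(111)
n = 10_000

# Simplified version of Case 1: population drives both DUI counts and
# liquor-store counts, with no direct link between the two.
popln = rng.pareto(1.2, n) * 10_000 + 10_000
dui = 100 + 0.04 * popln + rng.normal(0, 15, n)
stores = 20 + 0.009 * popln + rng.normal(0, 5, n)

def ols_t_stats(cols, y):
    """OLS t-statistics for an intercept plus the given predictor columns."""
    X = np.column_stack([np.ones(len(y)), *cols])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    sigma2 = resid @ resid / (len(y) - X.shape[1])
    se = np.sqrt(np.diag(sigma2 * np.linalg.inv(X.T @ X)))
    return beta / se

# Method 0: stores look hugely "significant" through the confounder.
t_naive = ols_t_stats([stores], dui)
# Method 2: controlling for population removes the spurious effect.
t_adjusted = ols_t_stats([stores, popln], dui)

print(abs(t_naive[1]))     # very large |t| for stores
print(abs(t_adjusted[1]))  # |t| for stores near 0 once popln is included
```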
28,853
Gradient descent of $f(w)=\frac12w^TAw-b^Tw$ viewed in the space of Eigenvectors of $A$
In many mathematical applications, the motivation becomes clearer after deriving the result. So let's start off with the algebra. Suppose we were to run GD for $T$ iterations. This will give us the set $\{w^k\}_{k=1}^T$. Let's do a change of basis: $w^k = Qx^k + w^*$ $\iff$ $x^k = Q^T(w^k-w^*)$. Now we have $\{x^k\}_{k=1}^T$. What can we say about them? Let's look at each coordinate separately. By substituting the above and using the update step of GD, $x_i^{k+1}= (Q^T(w^{k+1}-w^*))_i = (Q^T(w^k-\alpha (Aw^k-b)-w^*))_i$. Rearranging, $x_i^{k+1}=(Q^T(w^k-w^*))_i-\alpha \cdot (Q^T(Aw^k-b))_i$. The first term is exactly $x_i^k$. For the second term, we substitute $A=Q\,\text{diag}(\lambda _1, \dots, \lambda _n)\,Q^T$. This yields $x_i^{k+1}=x_i^k-\alpha \lambda _i x_i^k=(1-\alpha \lambda _i)x_i^k$, which is a single step. Unrolling the recursion all the way back to $x^0$, we get $x_i^{k+1}=(1-\alpha \lambda _i)^{k+1}x_i^0$. All this seems really useless at this point. Let's go back to our initial concern, the $w$s. From our original change of basis, we know that $w^k-w^*=Qx^k$. Another way of writing the multiplication of the matrix $Q$ by the vector $x^k$ is as $\sum_i x_i^kq_i$. But we've shown above that $x_i^{k}=(1-\alpha \lambda _i)^{k}x_i^0$. Plugging everything together, we have obtained the desired "closed form" formula for the GD update step: $w^k-w^*=\sum_i x_i^0(1-\alpha \lambda _i)^{k} q_i$. This is essentially an expression for the "error" at iteration $k$ of GD (how far we are from the optimal solution, $w^*$). Since we're interested in evaluating the performance of GD, this is the expression we want to analyze. There are two immediate observations. The first is that this term goes to 0 as $k$ goes to infinity (for a step size $\alpha$ small enough that $|1-\alpha\lambda_i|<1$ for all $i$), which is of course good news. The second is that the error decomposes very nicely into the separate elements of $x^0$, which is even nicer for the sake of our analysis. 
Here I quote from the original post, since I think they explain it nicely: Each element of $x^0$ is the component of the error in the initial guess in the $Q$-basis. There are $n$ such errors, and each of these errors follows its own, solitary path to the minimum, decreasing exponentially with a compounding rate of $1-\alpha \lambda_i $. The closer that number is to 1, the slower it converges. I hope this clears things up for you enough that you can go on to continue reading the post. It's a really good one!
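As a sanity check on the closed-form error expression (not part of the original post), a small numpy experiment with a random positive-definite $A$ confirms that the GD iterates match it exactly:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5

# Random symmetric positive-definite A and arbitrary b
M = rng.normal(size=(n, n))
A = M @ M.T + n * np.eye(n)
b = rng.normal(size=n)
w_star = np.linalg.solve(A, b)

lam, Q = np.linalg.eigh(A)        # A = Q diag(lam) Q^T
alpha = 1.0 / lam.max()           # step size small enough to converge

w = rng.normal(size=n)            # initial guess w^0
x0 = Q.T @ (w - w_star)           # error coordinates in the Q-basis

k_steps = 20
for _ in range(k_steps):
    w = w - alpha * (A @ w - b)   # plain gradient descent step

# Closed form: w^k - w^* = sum_i x_i^0 (1 - alpha*lam_i)^k q_i
closed_form = Q @ ((1 - alpha * lam) ** k_steps * x0)
assert np.allclose(w - w_star, closed_form)
```

Each component of the initial error really does decay at its own rate $(1-\alpha\lambda_i)$, with the slowest rate dominating the overall convergence.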
28,854
Gradient descent of $f(w)=\frac12w^TAw-b^Tw$ viewed in the space of Eigenvectors of $A$
I have read the same paper, got stuck at the exact same place, and worked through it with the help of galoosh33's answer. I just didn't find obvious the step: \begin{equation} \begin{split} x_{i}^{k+1} & = (Q^{T}(w^{k} - w^{*}))_{i} - \alpha (Q^{T}(Aw^{k} - b))_{i} \\ & = x_{i}^{k} - \alpha \lambda_{i} x_{i}^{k} \end{split} \end{equation} So for those who do not want to work through the algebra and do not immediately see how we got rid of $b$: it follows from the substitutions $w^{k} = Qx^{k} + w^{*}$ and $w^{*} = A^{-1}b$ and the fact that the eigenvector matrix is orthogonal, $Q^{-1} = Q^{T}$. \begin{equation} \begin{split} (Q^{T} A w^{k} - Q^{T}b)_{i} & = (Q^{T} A Q x^{k} + Q^{T} A \overbrace{w^{*}}^{A^{-1}b} - Q^{T}b)_{i} \\ & = (\underbrace{Q^{T} Q}_{I} \text{diag}(\lambda_1, \ldots, \lambda_n) \underbrace{Q^T Q}_{I} x^{k} \underbrace{+ Q^{T} \underbrace{A A^{-1}}_{I} b - Q^{T} b}_{0})_{i} \\ & = \lambda_{i} x_{i}^{k} \end{split} \end{equation}
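That identity is also easy to confirm numerically; a short numpy check of my own (random positive-definite $A$):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4
M = rng.normal(size=(n, n))
A = M @ M.T + np.eye(n)           # symmetric positive definite
b = rng.normal(size=n)
w_star = np.linalg.solve(A, b)    # w* = A^{-1} b

lam, Q = np.linalg.eigh(A)        # A = Q diag(lam) Q^T
w = rng.normal(size=n)            # an arbitrary iterate w^k
x = Q.T @ (w - w_star)            # its Q-basis coordinates x^k

# The step worked through above: (Q^T (A w^k - b))_i = lam_i * x_i^k
assert np.allclose(Q.T @ (A @ w - b), lam * x)
```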
28,855
Gradient descent of $f(w)=\frac12w^TAw-b^Tw$ viewed in the space of Eigenvectors of $A$
I'll provide a few comments in the language of machine learning that will hopefully lead you to a helpful logical conclusion. First, minimizing that quadratic objective is like solving a least squares problem (if this is not obvious, try proving it as an exercise). Second, for any least squares problem, if the features are orthogonal, then estimating the coefficients separately or sequentially (like doing exactly one round of coordinate descent) is equivalent to estimating them jointly. (If this isn't obvious, then suppose the features are orthogonal. Do you see that this means $A$ must be diagonal? That means each entry of the solution does not depend on the others.) So now the question is: how can we solve the same problem, but with a diagonal matrix in place of $A$? Third, the $\ell_2$ norm is orthogonally invariant, so if you left- or right-multiply whatever is inside the norm by an orthogonal matrix (which is interpreted as a rotation), you can just solve that problem and then back out that orthogonal transformation at the end. Since $A$ is symmetric positive semi-definite, we can get those orthogonal matrices from the eigenvalue decomposition of $A$ (aka by "diagonalizing" $A$). Back to statistics: this process is sometimes referred to as whitening or pre-whitening, though I believe that there is a lack of consensus as to the usage of this term. Put simply and loosely, in the eigenspace of $A$, the columns/rows of $A$ can be viewed as totally separate and unrelated pieces of information.
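A small numpy illustration of these comments (the construction is mine): after rotating into the eigenbasis of $A$, the matrix becomes diagonal, so each coordinate of the solution can be computed entirely on its own, and rotating back recovers the joint solution:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 4
M = rng.normal(size=(n, n))
A = M @ M.T + np.eye(n)       # symmetric positive definite
b = rng.normal(size=n)

lam, Q = np.linalg.eigh(A)    # the "whitening" rotation: A = Q diag(lam) Q^T

# In the rotated coordinates the problem is diagonal, so each coordinate
# is solved separately: x_i = (Q^T b)_i / lam_i
x = (Q.T @ b) / lam

# Rotating back recovers the joint minimizer of the original quadratic
assert np.allclose(Q @ x, np.linalg.solve(A, b))
```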
28,856
Recommendations for Non-Proportional Hazards
Fantastic question, fantastic answers. I'll add that you should consider a model making much different assumptions, such as the lognormal survival model. Use the normal inverse function for the y-axis instead of log-log. You still need to covariate-adjust, so also look at normality of residuals stratified by treatment. This is covered in a case study near the end of my course notes at https://hbiostat.org/rmsc
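As a side note on why the normal-inverse (probit) axis is the natural diagnostic here: under a lognormal survival model, survival plotted on a probit scale against log time is exactly a straight line with slope $1/\sigma$. A quick check using only the Python standard library (parameter values arbitrary):

```python
import math
from statistics import NormalDist

# Lognormal survival: S(t) = 1 - Phi((ln t - mu) / sigma)
mu, sigma = 1.0, 0.5   # illustrative parameters
Phi = NormalDist()

def S(t):
    return 1 - Phi.cdf((math.log(t) - mu) / sigma)

# Probit-transformed survival against log time:
ts = [0.5, 1, 2, 4, 8]
xs = [math.log(t) for t in ts]
ys = [Phi.inv_cdf(1 - S(t)) for t in ts]

# Successive slopes are all the same, i.e. the curve is a straight line
slopes = [(ys[i + 1] - ys[i]) / (xs[i + 1] - xs[i]) for i in range(len(ts) - 1)]
print(slopes)  # each slope is approximately 1/sigma = 2
```

Departures from a straight line on this scale are therefore evidence against the lognormal assumption, the analogue of checking log-log parallelism for proportional hazards.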
28,857
Recommendations for Non-Proportional Hazards
You certainly don't have marginal proportional hazards. That does not mean you don't have conditional proportional hazards! To explain in more depth, consider the following situation: let's suppose we have group 1, which is very homogeneous and has constant hazard = 1. Now in group 2, we have a heterogeneous population; 50% are at lower risk than group 1 (hazard = 0.5) and the rest are at higher risk than group 1 (hazard = 3). Clearly, if we knew whether everyone in group 2 was a higher- or lower-risk subject, then everyone would have proportional hazards. These are the conditional hazards. But let's suppose we don't know (or ignore) whether someone in group 2 is at high or low risk. Then the marginal distribution for them is that of a mixture model: 50% chance they have hazard = 0.5, 50% chance they have hazard = 3. Below, I provide some R code along with a plot of the two hazards.

# Function for computing the hazards from
# a 50/50 heterogeneous population
mix_hazard <- function(x, hzd1 = 0.5, hzd2 = 3){
  x_dens <- 0.5 * dexp(x, hzd1) + 0.5 * dexp(x, hzd2)
  x_s <- 1 - (0.5 * pexp(x, hzd1) + 0.5 * pexp(x, hzd2))
  hzd <- x_dens/x_s
  return(hzd)
}

x <- 0:100/20
plot(x, mix_hazard(x), type = 'l', col = 'purple',
     ylim = c(0, 2), xlab = 'Time', ylab = 'Hazard', lwd = 2)
lines(x, rep(1, length(x)), col = 'red', lwd = 2)
legend('topright', legend = c('Homogeneous', 'Heterogeneous'),
       lwd = 2, col = c('red', 'purple'))

We see clearly non-proportional marginal hazards! But note that if we knew whether the subjects in group 2 were high-risk or low-risk subjects, we would have proportional hazards. So how does this affect you? Well, you mentioned you have a lot of other covariates about these subjects. It is very possible that when we ignore these covariates, the hazards are non-proportional, but after adjusting for them, you may capture the causes of the heterogeneity in the different groups and fix up your non-proportional hazards issue.
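The same mixture hazard can be checked numerically without plotting; here is a plain-Python translation of the function above (same subgroup hazards, 0.5 and 3):

```python
import math

# 50/50 mixture of two exponential subgroups with hazards h1 and h2.
# Marginal hazard = mixture density / mixture survival.
def mix_hazard(t, h1=0.5, h2=3.0):
    dens = 0.5 * h1 * math.exp(-h1 * t) + 0.5 * h2 * math.exp(-h2 * t)
    surv = 0.5 * math.exp(-h1 * t) + 0.5 * math.exp(-h2 * t)
    return dens / surv

# At t = 0 the marginal hazard is the average of the subgroup hazards:
print(mix_hazard(0.0))   # (0.5*0.5 + 0.5*3) = 1.75
# As high-risk subjects die off, it decays toward the low-risk hazard:
print(mix_hazard(5.0))   # close to 0.5

# Relative to the homogeneous group's constant hazard of 1, the hazard
# ratio therefore changes over time: the marginal hazards are not
# proportional even though the conditional ones are.
```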
28,858
Why is Zellner's g prior "unacceptable"?
In our book, Bayesian Essentials with R, we state almost the same thing: Zellner's prior somehow appears as a data-dependent prior through its dependence on $X$, but this is not really a problem since the whole model is conditional on $X$. Zellner's prior is written as $$ \beta|\sigma \sim \mathscr{N}_p\left(\tilde\beta,g\sigma^2(X^\text{T}X)^{-1}\right)\qquad \sigma\sim\pi(\sigma)=1/\sigma $$ and its major inconvenience is the dependence on the constant $g$, which impacts the resulting inference in a significant manner. This is illustrated in the book. A way out of this problem is to associate $g$ with a prior distribution, as detailed in Bayesian Essentials with R. A more expedient way out is to settle for $g=n$. A second issue with the Zellner prior is that it is an improper prior (because of $\sigma$) and hence faces difficulties for model comparison, as in variable selection. A somewhat dirty trick bypasses this difficulty; again quoting from the book: we are compelled to denote by $\sigma^2$ and $\alpha$ the variance and intercept terms common to all models, respectively. Although this is more of a mathematical trick than a true modeling reason, the prior independence of $(\alpha, \sigma^2)$ and the model index allows for the simultaneous use of Bayes factors and an improper prior on those nuisance parameters. Therefore, it does not seem right to call Zellner's prior unacceptable. In my opinion, the only unacceptable priors are those that conflict with prior information. In a non-informative situation, any prior should be acceptable, at least a priori. (It may be that the data reveals a conflict between the prior and the parameter that could have been behind the data.)
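To make the dependence on $g$ concrete: with $\sigma$ treated as known, the conjugate update under this prior gives the posterior mean $(g\hat\beta+\tilde\beta)/(1+g)$, a shrinkage of the OLS estimate $\hat\beta$ toward the prior mean $\tilde\beta$ whose weight is set entirely by $g$. A small numpy check (simulated data of my own):

```python
import numpy as np

rng = np.random.default_rng(3)
n, p, sigma2 = 50, 3, 1.0

X = rng.normal(size=(n, p))
beta_true = np.array([1.0, -2.0, 0.5])
y = X @ beta_true + rng.normal(0, np.sqrt(sigma2), n)

XtX = X.T @ X
beta_hat = np.linalg.solve(XtX, X.T @ y)   # OLS estimate
beta_tilde = np.zeros(p)                   # prior mean

def g_prior_posterior_mean(g):
    # Conjugate Gaussian update for beta | sigma ~ N(beta_tilde,
    # g * sigma^2 * (X'X)^{-1}); the (X'X)^{-1} covariance makes the
    # posterior mean collapse to (g*beta_hat + beta_tilde) / (1 + g).
    prior_prec = XtX / (g * sigma2)
    like_prec = XtX / sigma2
    return np.linalg.solve(prior_prec + like_prec,
                           prior_prec @ beta_tilde + like_prec @ beta_hat)

for g in (1, 10, n):
    pm = g_prior_posterior_mean(g)
    # shrinkage factor g/(1+g) toward beta_tilde depends strongly on g
    assert np.allclose(pm, (g * beta_hat + beta_tilde) / (1 + g))
```

With $g=1$ the posterior mean sits halfway between prior and OLS; with $g=n$ it is nearly at OLS, which is one motivation for that default.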
28,859
Neural network for multi label classification with large number of classes outputs only zero
Tensorflow has a loss function weighted_cross_entropy_with_logits, which can be used to give more weight to the 1's. So it should be applicable to a sparse multi-label classification setting like yours. From the documentation:

This is like sigmoid_cross_entropy_with_logits() except that pos_weight allows one to trade off recall and precision by up- or down-weighting the cost of a positive error relative to a negative error. The argument pos_weight is used as a multiplier for the positive targets.

If you use the tensorflow backend in Keras, you can use the loss function like this (Keras 2.1.1):

import tensorflow as tf
import keras.backend.tensorflow_backend as tfb

POS_WEIGHT = 10  # multiplier for positive targets, needs to be tuned

def weighted_binary_crossentropy(target, output):
    """
    Weighted binary crossentropy between an output tensor
    and a target tensor. POS_WEIGHT is used as a multiplier
    for the positive targets.

    Combination of the following functions:
    * keras.losses.binary_crossentropy
    * keras.backend.tensorflow_backend.binary_crossentropy
    * tf.nn.weighted_cross_entropy_with_logits
    """
    # transform back to logits
    _epsilon = tfb._to_tensor(tfb.epsilon(), output.dtype.base_dtype)
    output = tf.clip_by_value(output, _epsilon, 1 - _epsilon)
    output = tf.log(output / (1 - output))
    # compute weighted loss
    loss = tf.nn.weighted_cross_entropy_with_logits(targets=target,
                                                    logits=output,
                                                    pos_weight=POS_WEIGHT)
    return tf.reduce_mean(loss, axis=-1)

Then in your model:

model.compile(loss=weighted_binary_crossentropy, ...)

I have not found many resources yet which report well-working values for the pos_weight in relation to the number of classes, average active classes, etc.
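For intuition on what pos_weight does before wiring it into Keras, here is the same weighted loss written directly on probabilities in plain numpy (toy numbers, not TensorFlow code):

```python
import numpy as np

def weighted_bce(target, p, pos_weight):
    # Per-element weighted binary cross-entropy: the loss on positive
    # targets is multiplied by pos_weight, which is what
    # weighted_cross_entropy_with_logits computes (there expressed in
    # terms of logits rather than probabilities).
    return -(pos_weight * target * np.log(p) + (1 - target) * np.log(1 - p))

target = np.array([1.0, 1.0, 0.0])
p = np.array([0.1, 0.9, 0.1])   # predicted probabilities

plain = weighted_bce(target, p, pos_weight=1)
weighted = weighted_bce(target, p, pos_weight=10)

# A missed positive (target 1, p = 0.1) now costs 10x more...
assert np.isclose(weighted[0], 10 * plain[0])
# ...while negatives are untouched, so predicting all zeros is no
# longer near-optimal on a sparse label matrix.
assert np.isclose(weighted[2], plain[2])
```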
28,860
Neural network for multi label classification with large number of classes outputs only zero
Update for tensorflow 2.6.0: I was going to write a comment, but there are many things that need to be changed for @tobigue's answer to work, and I am not entirely sure that everything is correct with my answer. To make things work:

You need to replace import keras.backend.tensorflow_backend as tfb with import keras.backend as tfb

The target parameter in tf.nn.weighted_cross_entropy_with_logits needs to be changed to labels

tf.log needs to be called like this: tf.math.log

To make this custom loss function work with keras, you need to import get_custom_objects and register the custom loss function. So, from keras.utils.generic_utils import get_custom_objects and then before you compile the model you need to:

get_custom_objects().update({"weighted_binary_crossentropy": weighted_binary_crossentropy})

I also encountered this error, but it may not be the same for everyone. The error is:

TypeError: Input 'y' of 'Mul' Op has type float32 that does not match type int32 of argument 'x'.

To fix this error, I have converted the target to float32 like this: target = tf.cast(target, tf.float32)

So, the final code that I am using is this:

import tensorflow as tf
import keras.backend as tfb
from keras.utils.generic_utils import get_custom_objects

POS_WEIGHT = 10  # multiplier for positive targets, needs to be tuned

def weighted_binary_crossentropy(target, output):
    """
    Weighted binary crossentropy between an output tensor
    and a target tensor. POS_WEIGHT is used as a multiplier
    for the positive targets.

    Combination of the following functions:
    * keras.losses.binary_crossentropy
    * keras.backend.tensorflow_backend.binary_crossentropy
    * tf.nn.weighted_cross_entropy_with_logits
    """
    # transform back to logits
    _epsilon = tfb._to_tensor(tfb.epsilon(), output.dtype.base_dtype)
    output = tf.clip_by_value(output, _epsilon, 1 - _epsilon)
    output = tf.math.log(output / (1 - output))
    # compute weighted loss
    target = tf.cast(target, tf.float32)
    loss = tf.nn.weighted_cross_entropy_with_logits(labels=target,
                                                    logits=output,
                                                    pos_weight=POS_WEIGHT)
    return tf.reduce_mean(loss, axis=-1)

Then in your model:

get_custom_objects().update({"weighted_binary_crossentropy": weighted_binary_crossentropy})
model.compile(loss='weighted_binary_crossentropy', ...)
Neural network for multi label classification with large number of classes outputs only zero
Update for tensorflow 2.6.0: I was going to write a comment but there are many things that needs to be changed for @tobigue answer to work. And I am not entirely sure if everything is correct with my
Neural network for multi label classification with large number of classes outputs only zero Update for tensorflow 2.6.0: I was going to write a comment but there are many things that needs to be changed for @tobigue answer to work. And I am not entirely sure if everything is correct with my answer. To make things work: You need to replace import keras.backend.tensorflow_backend as tfb with import keras.backend as tfb The target parameter in tf.nn.weighted_cross_entropy_with_logits needs to be changed to labels tf.log needs to be called like this: tf.math.log To make this custom loss function to work with keras, you need to import get_custom_objects and define the custom loss function as a loss function. So, from keras.utils.generic_utils import get_custom_objects and then before you compile the model you need to: get_custom_objects().update({"weighted_binary_crossentropy": weighted_binary_crossentropy}) I also encountered this error but it may not be the same for everyone. The error is: TypeError: Input 'y' of 'Mul' Op has type float32 that does not match type int32 of argument 'x'. To fix this error, I have converted the target to float32 like this: target = tf.cast(target, tf.float32) So, the final code that I am using is this: import tensorflow as tf import keras.backend as tfb from keras.utils.generic_utils import get_custom_objects POS_WEIGHT = 10 # multiplier for positive targets, needs to be tuned def weighted_binary_crossentropy(target, output): """ Weighted binary crossentropy between an output tensor and a target tensor. POS_WEIGHT is used as a multiplier for the positive targets. 
Combination of the following functions: * keras.losses.binary_crossentropy * keras.backend.tensorflow_backend.binary_crossentropy * tf.nn.weighted_cross_entropy_with_logits """ # transform back to logits _epsilon = tfb._to_tensor(tfb.epsilon(), output.dtype.base_dtype) output = tf.clip_by_value(output, _epsilon, 1 - _epsilon) output = tf.math.log(output / (1 - output)) # compute weighted loss target = tf.cast(target, tf.float32) loss = tf.nn.weighted_cross_entropy_with_logits(labels=target, logits=output, pos_weight=POS_WEIGHT) return tf.reduce_mean(loss, axis=-1) Then in your model get_custom_objects().update({"weighted_binary_crossentropy": weighted_binary_crossentropy}) model.compile(loss='weighted_binary_crossentropy', ...)
Neural network for multi label classification with large number of classes outputs only zero Update for tensorflow 2.6.0: I was going to write a comment but there are many things that need to be changed for @tobigue's answer to work. And I am not entirely sure if everything is correct with my
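The weighted loss in the entry above has a simple closed form on probabilities. As a framework-free sanity check, here is a NumPy sketch (the targets and predictions are made up; the formula mirrors what tf.nn.weighted_cross_entropy_with_logits computes, namely -[w*y*log(p) + (1-y)*log(1-p)]):

```python
import numpy as np

POS_WEIGHT = 10.0  # same multiplier for positive targets as in the answer

def weighted_bce_np(target, output, pos_weight=POS_WEIGHT, eps=1e-7):
    """NumPy mirror of the weighted binary crossentropy: clip probabilities
    as the Keras code does, then apply
    -[w * y * log(p) + (1 - y) * log(1 - p)], averaged over the labels."""
    p = np.clip(output, eps, 1 - eps)
    loss = -(pos_weight * target * np.log(p) + (1 - target) * np.log(1 - p))
    return loss.mean(axis=-1)

y_true = np.array([[1.0, 0.0, 1.0, 0.0]])
y_pred = np.array([[0.9, 0.1, 0.4, 0.6]])

loss_weighted = weighted_bce_np(y_true, y_pred)             # pos_weight = 10
loss_plain = weighted_bce_np(y_true, y_pred, pos_weight=1)  # ordinary BCE
```

The under-predicted positive (0.4 for a true 1) is penalized ten times harder than in plain binary crossentropy, which is the mechanism that stops the network from collapsing to all-zero outputs.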
28,861
Computing the Actor Gradient Update in the Deep Deterministic Policy Gradient (DDPG) algorithm
There is no difference in the calculation. I was wondering the same thing and verified in my own TensorFlow DDPG implementation by trying both and asserting that the numerical values are identical. As expected, they are. I noticed that most tutorial-like implementations (e.g. Patrick Emami's) explicitly show the multiplication. However, OpenAI's baselines implementation *does* directly compute $\nabla_{\theta^\mu} Q$. (They do this by defining a loss on the actor network equal to $-Q$, averaged across the batch, whose gradient is $-\nabla_{\theta^\mu} Q$.) There is one reason that you'd want to separate out $\nabla_a Q$ from $\nabla_{\theta^\mu} \mu$ and multiply them. This is if you want to directly manipulate one of the terms. For example, Hausknecht and Stone do "inverting gradients" on $\nabla_a Q$ to coerce actions to stay within the environment's range.
Computing the Actor Gradient Update in the Deep Deterministic Policy Gradient (DDPG) algorithm
There is no difference in the calculation. I was wondering the same thing and verified in my own TensorFlow DDPG implementation by trying both and asserting that the numerical values are identical. As
Computing the Actor Gradient Update in the Deep Deterministic Policy Gradient (DDPG) algorithm There is no difference in the calculation. I was wondering the same thing and verified in my own TensorFlow DDPG implementation by trying both and asserting that the numerical values are identical. As expected, they are. I noticed that most tutorial-like implementations (e.g. Patrick Emami's) explicitly show the multiplication. However, OpenAI's baselines implementation *does* directly compute $\nabla_{\theta^\mu} Q$. (They do this by defining a loss on the actor network equal to $-Q$, averaged across the batch, whose gradient is $-\nabla_{\theta^\mu} Q$.) There is one reason that you'd want to separate out $\nabla_a Q$ from $\nabla_{\theta^\mu} \mu$ and multiply them. This is if you want to directly manipulate one of the terms. For example, Hausknecht and Stone do "inverting gradients" on $\nabla_a Q$ to coerce actions to stay within the environment's range.
Computing the Actor Gradient Update in the Deep Deterministic Policy Gradient (DDPG) algorithm There is no difference in the calculation. I was wondering the same thing and verified in my own TensorFlow DDPG implementation by trying both and asserting that the numerical values are identical. As
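That equivalence can be checked without TensorFlow at all. Below is a NumPy sketch with a made-up one-parameter actor mu(s|theta) = theta*s and a hand-written critic Q(s, a) = -(a - 2)^2 (toy choices, not real DDPG networks), comparing the separated product with the direct derivative of the composition:

```python
import numpy as np

# Toy deterministic actor and critic (illustrative only):
#   actor:  mu(s | theta) = theta * s
#   critic: Q(s, a)       = -(a - 2.0)**2
s, theta = 1.5, 0.7
a = theta * s

# Way 1: separate terms, then multiply (the chain-rule product)
dQ_da = -2.0 * (a - 2.0)       # gradient of Q w.r.t. the action
dmu_dtheta = s                 # gradient of the actor output w.r.t. theta
grad_product = dQ_da * dmu_dtheta

# Way 2: differentiate the composition Q(s, mu(s|theta)) directly in theta
grad_direct = -2.0 * (theta * s - 2.0) * s

# Way 3: central finite difference as an independent check
h = 1e-6
Q = lambda th: -(th * s - 2.0) ** 2
grad_fd = (Q(theta + h) - Q(theta - h)) / (2 * h)
```

All three routes give the same number, which is the point of the answer: the product form and the direct form are the same chain rule written in two ways.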
28,862
Computing the Actor Gradient Update in the Deep Deterministic Policy Gradient (DDPG) algorithm
This way you can define two independent networks. Otherwise, you may have to define a large network and distinguish which part belongs to the policy and which one to the state-action value function.
Computing the Actor Gradient Update in the Deep Deterministic Policy Gradient (DDPG) algorithm
This way you can define two independent networks. Otherwise, you may have to define a large network and distinguish which part belongs to the policy and which one to the state-action value function.
Computing the Actor Gradient Update in the Deep Deterministic Policy Gradient (DDPG) algorithm This way you can define two independent networks. Otherwise, you may have to define a large network and distinguish which part belongs to the policy and which one to the state-action value function.
Computing the Actor Gradient Update in the Deep Deterministic Policy Gradient (DDPG) algorithm This way you can define two independent networks. Otherwise, you may have to define a large network and distinguish which part belongs to the policy and which one to the state-action value function.
28,863
Computing the Actor Gradient Update in the Deep Deterministic Policy Gradient (DDPG) algorithm
I am not sure I understand this multiplication between the two gradient terms. When you compute this using, say, tensorflow: J_grad = gradients( Q(s, mu(s|theta)), theta ) It applies the chain rule and therefore computes the gradients of Q w.r.t. the output of the policy network mu(s|theta) and then backpropagates those "errors" through the policy network to obtain the sampled gradients w.r.t. theta (the parameters of every layer of your policy network). However, when you do: Q_grad = gradients( Q(s, mu(s|theta)), mu(s|theta) ) mu_grad = gradients( mu(s|theta), theta ) J_grad = Q_grad * mu_grad Then in my understanding, it (1) computes the gradient of Q w.r.t. the output of the policy network mu(s|theta) and (2) the gradient of the output of the policy network mu(s|theta) w.r.t. the policy parameters theta, but this time SEPARATELY. What I don't understand is that now, you have on one hand your first gradient, which is a vector of size (1, action_dim), and on the other, you have your second gradient, which is a vector of size (1, theta_dim). To apply your update, you need a gradient w.r.t. theta, which would be a vector of size (1, theta_dim). So what exactly is this multiplication doing in the third line, and how is it equivalent to backpropagating the first gradient through the policy network: J_grad = Q_grad * mu_grad Question: Does it just perform an outer product, creating a matrix of shape (action_dim, theta_dim), which is then reduced by summing over the action dimension to obtain our update vector of shape (1, theta_dim)? If so, why is this valid (equivalent to backpropagating the first gradient through the policy network)?
Computing the Actor Gradient Update in the Deep Deterministic Policy Gradient (DDPG) algorithm
I am not sure I understand this multiplication between the two gradient terms. When you compute this using, say, tensorflow: J_grad = gradients( Q(s, mu(s|theta)), theta ) It applies the chain rule a
Computing the Actor Gradient Update in the Deep Deterministic Policy Gradient (DDPG) algorithm I am not sure I understand this multiplication between the two gradient terms. When you compute this using, say, tensorflow: J_grad = gradients( Q(s, mu(s|theta)), theta ) It applies the chain rule and therefore computes the gradients of Q w.r.t. the output of the policy network mu(s|theta) and then backpropagates those "errors" through the policy network to obtain the sampled gradients w.r.t. theta (the parameters of every layer of your policy network). However, when you do: Q_grad = gradients( Q(s, mu(s|theta)), mu(s|theta) ) mu_grad = gradients( mu(s|theta), theta ) J_grad = Q_grad * mu_grad Then in my understanding, it (1) computes the gradient of Q w.r.t. the output of the policy network mu(s|theta) and (2) the gradient of the output of the policy network mu(s|theta) w.r.t. the policy parameters theta, but this time SEPARATELY. What I don't understand is that now, you have on one hand your first gradient, which is a vector of size (1, action_dim), and on the other, you have your second gradient, which is a vector of size (1, theta_dim). To apply your update, you need a gradient w.r.t. theta, which would be a vector of size (1, theta_dim). So what exactly is this multiplication doing in the third line, and how is it equivalent to backpropagating the first gradient through the policy network: J_grad = Q_grad * mu_grad Question: Does it just perform an outer product, creating a matrix of shape (action_dim, theta_dim), which is then reduced by summing over the action dimension to obtain our update vector of shape (1, theta_dim)? If so, why is this valid (equivalent to backpropagating the first gradient through the policy network)?
Computing the Actor Gradient Update in the Deep Deterministic Policy Gradient (DDPG) algorithm I am not sure I understand this multiplication between the two gradient terms. When you compute this using, say, tensorflow: J_grad = gradients( Q(s, mu(s|theta)), theta ) It applies the chain rule a
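For what it's worth, the multiplication in question is not an outer product followed by a reduction: what backpropagation computes is a vector–Jacobian product, which is exactly "backpropagating dQ/da through the policy network". A NumPy sketch with a made-up linear actor mu(s|W) = W·s and a quadratic critic makes the shapes concrete:

```python
import numpy as np

rng = np.random.default_rng(0)
state_dim, action_dim = 3, 2
s = rng.normal(size=state_dim)
W = rng.normal(size=(action_dim, state_dim))  # actor parameters theta (flattened below)
g = np.array([1.0, -1.0])                     # critic target: Q(s, a) = -||a - g||^2

a = W @ s                                     # mu(s | theta)
dQ_da = -2.0 * (a - g)                        # shape (action_dim,)

# Jacobian of mu w.r.t. the flattened parameters: d(W s)_i / d W_jk = delta_ij * s_k
theta_dim = action_dim * state_dim
J_mu = np.zeros((action_dim, theta_dim))
for i in range(action_dim):
    J_mu[i, i * state_dim:(i + 1) * state_dim] = s

# The "multiplication" is a vector-Jacobian product:
# (action_dim,) @ (action_dim, theta_dim) -> (theta_dim,)
grad_chain = dQ_da @ J_mu

# Direct analytic gradient of Q(s, mu(s|theta)) w.r.t. W, flattened the same way
grad_direct = np.outer(dQ_da, s).ravel()
```

So nothing is summed away after the fact: the contraction over action_dim is the matrix multiplication itself, and it yields the (1, theta_dim) update vector directly, identical to differentiating the composition.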
28,864
How to estimate confidence interval of a least-squares fit parameters by means of numerical Jacobian
If I am understanding the question properly, you are asking about the non-linear least squares (NLS) model. Sticking to your notation, the NLS model is: \begin{align} y_n &= f(x_n,P) + \epsilon_n \end{align} Various regularity conditions are required on $f$, $X$, and $\epsilon$. The parameters are $P$, a $k$-vector. They are estimated by solving: \begin{align} \min_P \sum_{n=1}^N \left(y_n-f(x_n,P) \right)^2 \end{align} Assuming that you have found the global minimum and that it is interior to whatever the feasible set is for $P$ ($\mathbb{R}^k$, I guess?), the necessary FOC are: \begin{align} J_f^T\left(Y-f(X,P)\right)=0 \end{align} where $J_f$ is the $N \times k$ Jacobian of $f$ with respect to $P$. The estimator, $P'$, defined as the interior solution to the minimization problem above and therefore solving the first-order condition immediately above, is consistent and asymptotically normal. It has an asymptotic variance of $\sigma^2_rH^{-1}$, where $H = J_f^T J_f$. The variance of the error in the original model is assumed to be $Var(\epsilon_n)=\sigma^2_r$, and it may be consistently estimated as $\hat{\sigma}_r^2=\frac{1}{N-k}\sum_{n=1}^N\left(y_n-f(x_n,P')\right)^2$. These are standard results for the non-linear least squares model. My favorite reference for them is Amemiya, T (1985) Advanced Econometrics, section 4.3. The upshot of all this is that, if you want a 95% confidence interval for, say, the $3^{rd}$ element of $P$, you can use: \begin{align} P_{3}' \pm 1.96\sqrt{\hat{\sigma}_r^2\left(H^{-1}\right)_{33}} \end{align} Notice that one of the regularity conditions is that all the variances of the $\epsilon$ are equal and that all of the covariances among elements of the $\epsilon$ are zero. The model in the link you provided contemplates that different $\epsilon_n$ have different variances (i.e. heteroskedasticity). If that characterizes your application, then you need to modify your variance matrix. 
You can use a so-called sandwich estimator, like this: \begin{align} C &= \left( J_f^T J_f \right)^{-1}J_f^T \hat{\Sigma} J_f \left( J_f^T J_f \right)^{-1}\\ \hat{\Sigma} &= diag \left(\left(y_n-f(x_n,P') \right)^2 \right) \end{align} Then, the 95% confidence interval for the $3^{rd}$ element of $P$, would be: \begin{align} P_{3}' \pm 1.96\sqrt{C_{33}} \end{align} Further modifications would be required if there were to be correlations among the various elements of $\epsilon$.
How to estimate confidence interval of a least-squares fit parameters by means of numerical Jacobian
If I am understanding the question properly, you are asking about the non-linear least squares (NLS) model. Sticking to your notation, the NLS model is: \begin{align} y_n &= f(x_n,P) + \epsilon_n \en
How to estimate confidence interval of a least-squares fit parameters by means of numerical Jacobian If I am understanding the question properly, you are asking about the non-linear least squares (NLS) model. Sticking to your notation, the NLS model is: \begin{align} y_n &= f(x_n,P) + \epsilon_n \end{align} Various regularity conditions are required on $f$, $X$, and $\epsilon$. The parameters are $P$, a $k$-vector. They are estimated by solving: \begin{align} \min_P \sum_{n=1}^N \left(y_n-f(x_n,P) \right)^2 \end{align} Assuming that you have found the global minimum and that it is interior to whatever the feasible set is for $P$ ($\mathbb{R}^k$, I guess?), the necessary FOC are: \begin{align} J_f^T\left(Y-f(X,P)\right)=0 \end{align} where $J_f$ is the $N \times k$ Jacobian of $f$ with respect to $P$. The estimator, $P'$, defined as the interior solution to the minimization problem above and therefore solving the first-order condition immediately above, is consistent and asymptotically normal. It has an asymptotic variance of $\sigma^2_rH^{-1}$, where $H = J_f^T J_f$. The variance of the error in the original model is assumed to be $Var(\epsilon_n)=\sigma^2_r$, and it may be consistently estimated as $\hat{\sigma}_r^2=\frac{1}{N-k}\sum_{n=1}^N\left(y_n-f(x_n,P')\right)^2$. These are standard results for the non-linear least squares model. My favorite reference for them is Amemiya, T (1985) Advanced Econometrics, section 4.3. The upshot of all this is that, if you want a 95% confidence interval for, say, the $3^{rd}$ element of $P$, you can use: \begin{align} P_{3}' \pm 1.96\sqrt{\hat{\sigma}_r^2\left(H^{-1}\right)_{33}} \end{align} Notice that one of the regularity conditions is that all the variances of the $\epsilon$ are equal and that all of the covariances among elements of the $\epsilon$ are zero. The model in the link you provided contemplates that different $\epsilon_n$ have different variances (i.e. heteroskedasticity). If that characterizes your application, then you need to modify your variance matrix. 
You can use a so-called sandwich estimator, like this: \begin{align} C &= \left( J_f^T J_f \right)^{-1}J_f^T \hat{\Sigma} J_f \left( J_f^T J_f \right)^{-1}\\ \hat{\Sigma} &= diag \left(\left(y_n-f(x_n,P') \right)^2 \right) \end{align} Then, the 95% confidence interval for the $3^{rd}$ element of $P$, would be: \begin{align} P_{3}' \pm 1.96\sqrt{C_{33}} \end{align} Further modifications would be required if there were to be correlations among the various elements of $\epsilon$.
How to estimate confidence interval of a least-squares fit parameters by means of numerical Jacobian If I am understanding the question properly, you are asking about the non-linear least squares (NLS) model. Sticking to your notation, the NLS model is: \begin{align} y_n &= f(x_n,P) + \epsilon_n \en
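The recipe above can be sketched numerically end to end. The model, data, and parameter values below are invented for illustration; the point is the mechanics: build $J_f$ by central differences, form $\hat{\sigma}_r^2$ and $(J_f^T J_f)^{-1}$, and read off the 95% intervals:

```python
import numpy as np

rng = np.random.default_rng(1)

def f(x, P):
    """Toy nonlinear model (illustrative): exponential decay."""
    return P[0] * np.exp(-P[1] * x)

P_true = np.array([2.0, 0.5])
x = np.linspace(0.0, 5.0, 60)
y = f(x, P_true) + 0.05 * rng.standard_normal(x.size)

# Pretend a solver already produced the estimate; we reuse the truth so the
# focus stays on the interval machinery rather than the fitting step.
P_hat = P_true.copy()

# Numerical Jacobian J_f (N x k) by central differences
h = 1e-6
J = np.empty((x.size, P_hat.size))
for j in range(P_hat.size):
    e = np.zeros_like(P_hat)
    e[j] = h
    J[:, j] = (f(x, P_hat + e) - f(x, P_hat - e)) / (2 * h)

N, k = J.shape
resid = y - f(x, P_hat)
sigma2_hat = resid @ resid / (N - k)        # estimated error variance
cov = sigma2_hat * np.linalg.inv(J.T @ J)   # sigma^2 * (J^T J)^{-1}
half_width = 1.96 * np.sqrt(np.diag(cov))
ci = np.column_stack([P_hat - half_width, P_hat + half_width])
```

Each row of `ci` is the 95% interval for one element of P; the heteroskedastic sandwich version would simply swap `cov` for the expression in the answer.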
28,865
Probability of five children in the same class having the same given name
All data can be found here. Each value in the table represents the probability that given a 25-person sample from that location and birth year, 5 of them will share a name. Method: I used the Binomial PDF on each name to find the probability that any given 25-person class would have 5 people who shared a name: n = class size k = 5,6,...,n p_i = (# of name[i]'s) / (total # of kids) $$P_n(5+\ kids\ share\ name) = \sum_{\forall\ names}\sum_{k=5}^n{n \choose k}p_i^k(1-p_i)^{n-k} $$ For example, if there are 4,000,000 total kids, and 21,393 Emily's, then the probability that there are 5 Emily's in any given class with 25 students is Binomial(25, 5, 0.0053) = 0.0000002. Summing over all names does not give an exact answer, because by the Inclusion/Exclusion Principle, we must also account for the possibility of having multiple groups of 5 people who share names. However, since these probabilities are for all practical purposes nearly zero, I've assumed them to be negligible, and thus $P(\bigcup A_i) \approx \sum P(A_i)$. Update: As many people pointed out, there is considerable variance over time, and between states. So I ran the same program, on a STATE BY STATE basis, and over time. Here are the results (nation-wide probability is red, individual states are black): Interestingly, Vermont (my home state) has been consistently one of the most likely places for this to happen for the past several decades.
Probability of five children in the same class having the same given name
All data can be found here. Each value in the table represents the probability that given a 25-person sample from that location and birth year, 5 of them will share a name. Method: I used the Binomia
Probability of five children in the same class having the same given name All data can be found here. Each value in the table represents the probability that given a 25-person sample from that location and birth year, 5 of them will share a name. Method: I used the Binomial PDF on each name to find the probability that any given 25-person class would have 5 people who shared a name: n = class size k = 5,6,...,n p_i = (# of name[i]'s) / (total # of kids) $$P_n(5+\ kids\ share\ name) = \sum_{\forall\ names}\sum_{k=5}^n{n \choose k}p_i^k(1-p_i)^{n-k} $$ For example, if there are 4,000,000 total kids, and 21,393 Emily's, then the probability that there are 5 Emily's in any given class with 25 students is Binomial(25, 5, 0.0053) = 0.0000002. Summing over all names does not give an exact answer, because by the Inclusion/Exclusion Principle, we must also account for the possibility of having multiple groups of 5 people who share names. However, since these probabilities are for all practical purposes nearly zero, I've assumed them to be negligible, and thus $P(\bigcup A_i) \approx \sum P(A_i)$. Update: As many people pointed out, there is considerable variance over time, and between states. So I ran the same program, on a STATE BY STATE basis, and over time. Here are the results (nation-wide probability is red, individual states are black): Interestingly, Vermont (my home state) has been consistently one of the most likely places for this to happen for the past several decades.
Probability of five children in the same class having the same given name All data can be found here. Each value in the table represents the probability that given a 25-person sample from that location and birth year, 5 of them will share a name. Method: I used the Binomia
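The Emily arithmetic can be reproduced in a few lines of standard-library Python (the counts 21,393 and 4,000,000 are the ones quoted in the answer):

```python
from math import comb

def prob_at_least(k, n, p):
    """Exact tail probability P(X >= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

p_emily = 21393 / 4_000_000          # ~0.0053, as in the example
prob_5_emilys = prob_at_least(5, 25, p_emily)
```

This lands near 2e-7, matching the quoted Binomial(25, 5, 0.0053) = 0.0000002; the tail is dominated by the k = 5 term, so "exactly 5" and "at least 5" agree to the precision shown.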
28,866
Probability of five children in the same class having the same given name
Please see the following Python script (for Python 2). The answer is inspired by David C's answer. My final answer would be the probability of finding at least five Jacobs in one class, with Jacob being the most probable name according to the data from https://www.ssa.gov/oact/babynames/limits.html "National Data" from 2006. The probability is calculated according to a binomial distribution with the Jacob probability being the probability of success. import pandas as pd from scipy.stats import binom data = pd.read_csv(r"yob2006.txt", header=None, names=["Name", "Sex", "Count"]) # count of children in the dataset: sumCount = data.Count.sum() # do calculation for every name: for i, row in data.iterrows(): # relative counts of each name being interpreted as probability of occurrence data.loc[i, "probability"] = data.loc[i, "Count"]/float(sumCount) # Probabilities of five or more children with that name in a class of size n=25, 50 or 100 data.loc[i, "atleast5_class25"] = 1 - binom.cdf(4,25,data.loc[i, "probability"]) data.loc[i, "atleast5_class50"] = 1 - binom.cdf(4,50,data.loc[i, "probability"]) data.loc[i, "atleast5_class100"] = 1 - binom.cdf(4,100,data.loc[i, "probability"]) maxP25 = data["atleast5_class25"].max() maxP50 = data["atleast5_class50"].max() maxP100 = data["atleast5_class100"].max() print ("""Max. probability for at least five kids with same name out of 25: {:.2} for name {}""" .format(maxP25, data.loc[data.atleast5_class25==maxP25,"Name"].values[0])) print print ("""Max. probability for at least five kids with same name out of 50: {:.2} for name {}, of course.""" .format(maxP50, data.loc[data.atleast5_class50==maxP50,"Name"].values[0])) print print ("""Max. probability for at least five kids with same name out of 100: {:.2} for name {}, of course.""" .format(maxP100, data.loc[data.atleast5_class100==maxP100,"Name"].values[0])) Max. probability for at least five kids with same name out of 25: 4.7e-07 for name Jacob Max. 
probability for at least five kids with same name out of 50: 1.6e-05 for name Jacob, of course. Max. probability for at least five kids with same name out of 100: 0.00045 for name Jacob, of course. Up to a factor of 10, this is the same result as David C's. Thanks. (My answer does not sum over all the names; whether it should could be discussed.)
Probability of five children in the same class having the same given name
please see the following Python-script for Python2. Answer is inspired by David C's answer. My final answer would be, the probability of finding at least five Jacobs in one class, with Jacob being the
Probability of five children in the same class having the same given name Please see the following Python script (for Python 2). The answer is inspired by David C's answer. My final answer would be the probability of finding at least five Jacobs in one class, with Jacob being the most probable name according to the data from https://www.ssa.gov/oact/babynames/limits.html "National Data" from 2006. The probability is calculated according to a binomial distribution with the Jacob probability being the probability of success. import pandas as pd from scipy.stats import binom data = pd.read_csv(r"yob2006.txt", header=None, names=["Name", "Sex", "Count"]) # count of children in the dataset: sumCount = data.Count.sum() # do calculation for every name: for i, row in data.iterrows(): # relative counts of each name being interpreted as probability of occurrence data.loc[i, "probability"] = data.loc[i, "Count"]/float(sumCount) # Probabilities of five or more children with that name in a class of size n=25, 50 or 100 data.loc[i, "atleast5_class25"] = 1 - binom.cdf(4,25,data.loc[i, "probability"]) data.loc[i, "atleast5_class50"] = 1 - binom.cdf(4,50,data.loc[i, "probability"]) data.loc[i, "atleast5_class100"] = 1 - binom.cdf(4,100,data.loc[i, "probability"]) maxP25 = data["atleast5_class25"].max() maxP50 = data["atleast5_class50"].max() maxP100 = data["atleast5_class100"].max() print ("""Max. probability for at least five kids with same name out of 25: {:.2} for name {}""" .format(maxP25, data.loc[data.atleast5_class25==maxP25,"Name"].values[0])) print print ("""Max. probability for at least five kids with same name out of 50: {:.2} for name {}, of course.""" .format(maxP50, data.loc[data.atleast5_class50==maxP50,"Name"].values[0])) print print ("""Max. probability for at least five kids with same name out of 100: {:.2} for name {}, of course.""" .format(maxP100, data.loc[data.atleast5_class100==maxP100,"Name"].values[0])) Max. 
probability for at least five kids with same name out of 25: 4.7e-07 for name Jacob Max. probability for at least five kids with same name out of 50: 1.6e-05 for name Jacob, of course. Max. probability for at least five kids with same name out of 100: 0.00045 for name Jacob, of course. Up to a factor of 10, this is the same result as David C's. Thanks. (My answer does not sum over all the names; whether it should could be discussed.)
Probability of five children in the same class having the same given name please see the following Python-script for Python2. Answer is inspired by David C's answer. My final answer would be, the probability of finding at least five Jacobs in one class, with Jacob being the
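The script above is Python 2 (note the bare print statements). The scipy call at its heart is easy to reproduce on Python 3 with the standard library only; the name frequency below is a placeholder for illustration, not a value read from the SSA file:

```python
from math import comb

def at_least_five(n, p):
    """Same quantity as 1 - binom.cdf(4, n, p) in the script: the probability
    that five or more of n children carry a name of frequency p."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(5, n + 1))

p_name = 0.006   # hypothetical frequency of the most common name
probs = {n: at_least_five(n, p_name) for n in (25, 50, 100)}
```

As in the script's output, the probability climbs steeply with class size.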
28,867
Does an optimally designed neural network contain zero "dead" ReLU neurons when trained?
There's a difference between dead ReLUs and ReLUs that are silent on many--but not all--inputs. Dead ReLUs are to be avoided, whereas mostly-silent ReLUs can be useful because of the sparsity they induce. Dead ReLUs have entered a parameter regime where they're always in the negative domain of the activation function. This could happen, for example, if the bias is set to a large negative value. Because the activation function is zero for negative values, these units are silent for all inputs. When a ReLU is silent, the gradient of the loss function with respect to the parameters is zero, so no parameter updates will occur with gradient-based learning. Because dead ReLUs are silent for all inputs, they're trapped in this regime. Contrast this with a ReLU that's silent on many but not all inputs. In this case, the gradient is still zero when the unit is silent. If we're using an online learning procedure like minibatch/stochastic gradient descent, no parameter updates will occur for inputs that cause the unit to be silent. But, updates are still possible for other inputs, where the unit is active and the gradient is nonzero. Because dead ReLUs are silent for all inputs, they contribute nothing to the network, and are wasted. From an information theoretic perspective, any unit that has the same output value for all inputs (whether zero or not) carries no information about the input. Mostly-silent ReLUs behave differently for different inputs, and therefore maintain the ability to carry useful information.
Does an optimally designed neural network contain zero "dead" ReLU neurons when trained?
There's a difference between dead ReLUs and ReLUs that are silent on many--but not all--inputs. Dead ReLUs are to be avoided, whereas mostly-silent ReLUs can be useful because of the sparsity they ind
Does an optimally designed neural network contain zero "dead" ReLU neurons when trained? There's a difference between dead ReLUs and ReLUs that are silent on many--but not all--inputs. Dead ReLUs are to be avoided, whereas mostly-silent ReLUs can be useful because of the sparsity they induce. Dead ReLUs have entered a parameter regime where they're always in the negative domain of the activation function. This could happen, for example, if the bias is set to a large negative value. Because the activation function is zero for negative values, these units are silent for all inputs. When a ReLU is silent, the gradient of the loss function with respect to the parameters is zero, so no parameter updates will occur with gradient-based learning. Because dead ReLUs are silent for all inputs, they're trapped in this regime. Contrast this with a ReLU that's silent on many but not all inputs. In this case, the gradient is still zero when the unit is silent. If we're using an online learning procedure like minibatch/stochastic gradient descent, no parameter updates will occur for inputs that cause the unit to be silent. But, updates are still possible for other inputs, where the unit is active and the gradient is nonzero. Because dead ReLUs are silent for all inputs, they contribute nothing to the network, and are wasted. From an information theoretic perspective, any unit that has the same output value for all inputs (whether zero or not) carries no information about the input. Mostly-silent ReLUs behave differently for different inputs, and therefore maintain the ability to carry useful information.
Does an optimally designed neural network contain zero "dead" ReLU neurons when trained? There's a difference between dead ReLUs and ReLUs that are silent on many--but not all--inputs. Dead ReLUs are to be avoided, whereas mostly-silent ReLUs can be useful because of the sparsity they ind
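The distinction can be made concrete with a single ReLU unit out = max(w*x + b, 0) (a NumPy sketch with made-up numbers): the gradient of the output with respect to (w, b) is (x, 1) where the unit is active and (0, 0) where it is silent, so a unit with a large negative bias receives no updates at all:

```python
import numpy as np

def relu_param_grads(w, b, xs):
    """Gradient of out = max(w*x + b, 0) w.r.t. (w, b) at each input x:
    (x, 1) when the pre-activation is positive, (0, 0) otherwise."""
    active = (w * xs + b) > 0
    dw = np.where(active, xs, 0.0)
    db = np.where(active, 1.0, 0.0)
    return dw, db

xs = np.linspace(-1.0, 1.0, 11)

dead_dw, dead_db = relu_param_grads(w=1.0, b=-10.0, xs=xs)  # dead: silent everywhere
live_dw, live_db = relu_param_grads(w=1.0, b=-0.5, xs=xs)   # mostly silent, still trainable
```

The dead unit's gradients are identically zero on every input, so gradient descent can never move it out of that regime; the mostly-silent unit still gets nonzero gradients on the inputs that activate it.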
28,868
Interpreting multinomial logistic regression in scikit-learn
As the probabilities of each class must sum to one, we can either define n-1 independent coefficients vectors, or n coefficients vectors that are linked by the equation \sum_c p(y=c) = 1. The two parametrizations are equivalent. See also in Wikipedia Multinomial logistic regression - As a log-linear model. For a class c, we have a probability P(y=c) = e^{b_c.X} / Z, with Z a normalization that accounts for the equation \sum_c P(y=c) = 1. These probabilities are the expected probabilities of a class given the coefficients. They can be computed with predict_proba. To get better insight into the coefficients, please consider the left plot in this example. example http://scikit-learn.org/dev/_images/plot_logistic_multinomial_001.png In this example there are 3 classes a, b, c and 2 features x0, x1. The class is denoted y. After the fit of a multinomial logistic, each class has a coefficients vector C with 2 components (for the 2 features): (C_a0, C_a1), (C_b0, C_b1), (C_c0, C_c1) There is also an intercept (aka bias) I for each class, which is always unidimensional: I_a, I_b, I_c The dashed line represents the hyperplane defined by C and I: example: for class a, the hyperplane is defined by the equation x0 * C_a0 + x1 * C_a1 + I_a = 0 This is the hyperplane where P(y=a) = e^{x0 * C_a0 + x1 * C_a1 + I_a} / Z = 1 / Z. If C_a0 is positive, when x0 increases P(y=a) increases. If C_a0 is negative, when x0 increases P(y=a) decreases. However this is not the decision boundary. The decision boundary between classes a and b is defined by the equation: p(y=a) = p(y=b), which is e^{x0 * C_a0 + x1 * C_a1 + I_a} = e^{x0 * C_b0 + x1 * C_b1 + I_b}, or again x0 * C_a0 + x1 * C_a1 + I_a = x0 * C_b0 + x1 * C_b1 + I_b. This boundary hyperplane is visible in the plot by the background colors. If C_a0 - C_b0 is positive, when x0 increases P(y=a) / P(y=b) increases. If C_a0 - C_b0 is negative, when x0 increases P(y=a) / P(y=b) decreases.
Interpreting multinomial logistic regression in scikit-learn
As the probabilities of each class must sum to one, we can either define n-1 independent coefficients vectors, or n coefficients vectors that are linked by the equation \sum_c p(y=c) = 1. The two para
Interpreting multinomial logistic regression in scikit-learn As the probabilities of each class must sum to one, we can either define n-1 independent coefficients vectors, or n coefficients vectors that are linked by the equation \sum_c p(y=c) = 1. The two parametrizations are equivalent. See also in Wikipedia Multinomial logistic regression - As a log-linear model. For a class c, we have a probability P(y=c) = e^{b_c.X} / Z, with Z a normalization that accounts for the equation \sum_c P(y=c) = 1. These probabilities are the expected probabilities of a class given the coefficients. They can be computed with predict_proba. To get better insight into the coefficients, please consider the left plot in this example. example http://scikit-learn.org/dev/_images/plot_logistic_multinomial_001.png In this example there are 3 classes a, b, c and 2 features x0, x1. The class is denoted y. After the fit of a multinomial logistic, each class has a coefficients vector C with 2 components (for the 2 features): (C_a0, C_a1), (C_b0, C_b1), (C_c0, C_c1) There is also an intercept (aka bias) I for each class, which is always unidimensional: I_a, I_b, I_c The dashed line represents the hyperplane defined by C and I: example: for class a, the hyperplane is defined by the equation x0 * C_a0 + x1 * C_a1 + I_a = 0 This is the hyperplane where P(y=a) = e^{x0 * C_a0 + x1 * C_a1 + I_a} / Z = 1 / Z. If C_a0 is positive, when x0 increases P(y=a) increases. If C_a0 is negative, when x0 increases P(y=a) decreases. However this is not the decision boundary. The decision boundary between classes a and b is defined by the equation: p(y=a) = p(y=b), which is e^{x0 * C_a0 + x1 * C_a1 + I_a} = e^{x0 * C_b0 + x1 * C_b1 + I_b}, or again x0 * C_a0 + x1 * C_a1 + I_a = x0 * C_b0 + x1 * C_b1 + I_b. This boundary hyperplane is visible in the plot by the background colors. If C_a0 - C_b0 is positive, when x0 increases P(y=a) / P(y=b) increases. If C_a0 - C_b0 is negative, when x0 increases P(y=a) / P(y=b) decreases.
Interpreting multinomial logistic regression in scikit-learn As the probabilities of each class must sum to one, we can either define n-1 independent coefficients vectors, or n coefficients vectors that are linked by the equation \sum_c p(y=c) = 1. The two para
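The boundary algebra above can be replayed numerically. The coefficients and intercepts in this NumPy sketch are invented (not from a fitted model); the check is that a point solving x0*(C_a0 - C_b0) + x1*(C_a1 - C_b1) + I_a - I_b = 0 gets equal softmax probabilities for classes a and b:

```python
import numpy as np

# Made-up multinomial-logistic parameters for 3 classes (a, b, c), 2 features
C = np.array([[ 1.0,  0.0],   # class a: (C_a0, C_a1)
              [-1.0,  1.0],   # class b: (C_b0, C_b1)
              [ 0.0, -1.0]])  # class c: (C_c0, C_c1)
I = np.array([0.1, -0.2, 0.0])

def predict_proba(x):
    """P(y=c) = e^{x.C_c + I_c} / Z, as in the log-linear formulation above."""
    scores = C @ x + I
    e = np.exp(scores - scores.max())  # stabilized softmax
    return e / e.sum()

# With these numbers the a/b boundary is 2*x0 - x1 + 0.3 = 0; x = (0, 0.3) lies on it.
p = predict_proba(np.array([0.0, 0.3]))
```

On the boundary point, classes a and b tie (and both beat c), which is exactly what the background colors in the linked plot display.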
28,869
Interpreting multinomial logistic regression in scikit-learn
Let W = array of coefficients (6x4 matrix), b = intercepts; then y = W*X + $b^T$ gives a 6x1 vector of class scores. Applying the softmax to y gives the probability of each class, and the class with the highest score (equivalently, the highest probability) is your prediction. Note: X can be a 4xm matrix of features, where 'm' is the number of inputs. In that case y is a 6xm matrix, where each column gives the prediction corresponding to each of the 'm' inputs.
Interpreting multinomial logistic regression in scikit-learn
Let W = array of coefficients(6x4 matrix) , b = intercepts, then y = W*X + $b^T$ gives a 6x1 vector of probabilities corresponding to each class, of which the class having highest probability is your
Interpreting multinomial logistic regression in scikit-learn Let W = array of coefficients(6x4 matrix) , b = intercepts, then y = W*X + $b^T$ gives a 6x1 vector of probabilities corresponding to each class, of which the class having highest probability is your prediction. Note: X can be a 4xm vector of features, where 'm' is the number of inputs. In that case y is a 6xm vector, where each column gives the prediction corresponding to each of the 'm' inputs.
Interpreting multinomial logistic regression in scikit-learn Let W = array of coefficients(6x4 matrix) , b = intercepts, then y = W*X + $b^T$ gives a 6x1 vector of probabilities corresponding to each class, of which the class having highest probability is your
28,870
Gaussian with a Gaussian mean
Yes (assuming they are independent). Changing the mean of a variable from $0$ to $x$ is equivalent to adding $x$ to that variable by linearity of expectation. And if $x$ is normally distributed, the addition of two independent normal variables is still normal. To be more precise, let $X \sim \mathcal{N}(\mu, \sigma_1^2)$ and $Y \sim \mathcal{N}(0, \sigma_2^2)$, and then the variable of interest is: $$X + Y \sim \mathcal{N}(\mu, \sigma_1^2 + \sigma_2^2)$$ which is still normal. If they are dependent, then I don't believe we can conclude very much about the resulting distribution. Edit: To address the revised question, let $n$ be the number of times we sample $X$ in step 1, and then $m$ be the number of times we sample from $x_i + Y$ in step 3. My previous response addressed the case where $m = 1$. For $m > 1$, each of the $m$ "subsamples" depends on the same realization $x_i$. You can think of this as $n$ separate normal distributions, each shifted by amounts that are normally distributed after being sampled $m$ times. There may not be any nicer description, but this depends on what you want to analyze and what questions you want to answer about the resulting distribution. Also note that depending on the relative sizes of $n$ and $m$ (e.g. if $m$ tends to infinity while $n$ stays constant, or $n$ tends to infinity while $m$ stays constant) or the relative sizes of $\sigma_1^2$ and $\sigma_2^2$, you may be able to come up with a suitable approximation. For example, the following histogram depicts a random sample of $nm$ points according to your scheme where: $n = 2$, $m = 10000$, $\mu = 0$, $\sigma_1^2 = 50$, and $\sigma_2^2 = 1$.
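The two-stage scheme above is easy to simulate. A quick pure-Python sanity check (parameter values chosen arbitrarily for illustration) confirms that, with one draw per sampled mean (the $m = 1$ case), the pooled draws behave like a single normal with variance $\sigma_1^2 + \sigma_2^2$:

```python
import random
import statistics

random.seed(0)
mu, s1, s2 = 0.0, 3.0, 4.0       # theory predicts pooled variance 3^2 + 4^2 = 25

# Step 1: draw a mean x_i ~ N(mu, s1^2); step 3: one draw from N(x_i, s2^2).
n = 200_000
samples = [random.gauss(random.gauss(mu, s1), s2) for _ in range(n)]

mean = statistics.fmean(samples)
var = statistics.pvariance(samples, mu=mean)
```

With the seed fixed, the sample mean comes out near $\mu = 0$ and the sample variance near $25$, as the addition rule for independent normals predicts.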
28,871
How to improve running time for R MICE data imputation
You can use quickpred() from the mice package, with which you can limit the predictors by specifying mincor (minimum correlation) and minpuc (minimum proportion of usable cases). You can also use the exclude and include parameters to control which predictors are used.
28,872
How to improve running time for R MICE data imputation
I made a wrapper for the mice function that includes one extra argument, droplist, where you can pass a character vector of predictor variables that you do not want used in the right-hand-side of the imputation formulas. This was for speed, as I found that factor variables with many levels would slow down the imputation considerably. I wasn't aware of the quickpred function referenced by @Aanish, and perhaps you could use both concepts together. Below is the function as it appears in my glmmplus package. If you find it useful, I may open a pull request in the actual mice package. ImputeData <- function(data, m = 10, maxit = 15, droplist = NULL) { if (length(intersect(names(data), droplist)) < length(droplist)) { stop("Droplist variables not found in data set") } predictorMatrix <- (1 - diag(1, ncol(data))) for (term in droplist) { drop.index <- which(names(data) == term) predictorMatrix[, drop.index] <- 0 } mids.out <- mice(data, m = m, maxit = maxit, predictorMatrix = predictorMatrix) return(mids.out) }
28,873
Fitting negative binomial distribution to large count data
Firstly, goodness-of-fit tests or tests for particular distributions will typically reject the null hypothesis given a sufficiently large sample size, because we are hardly ever in the situation where data arises exactly from a particular distribution and we have also taken into account all relevant (possibly unmeasured) covariates that explain further differences between subjects/units. However, in practice such deviations can be pretty irrelevant, and it is well known that many models can be used even if there are some deviations from distributional assumptions (most famously regarding the normality of residuals in regression models with normal error terms). Secondly, a negative binomial model is a relatively logical default choice for count data (that can only be $\geq 0$). We do not have that many details though, and there might be obvious features of the data (e.g. regarding how it arises) that would suggest something more sophisticated. E.g. accounting for key covariates using negative binomial regression could be considered.
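One way to see why the negative binomial is a natural default here: it arises as a Poisson whose rate is itself Gamma-distributed, so its variance exceeds its mean (overdispersion). A small pure-Python simulation (parameters chosen arbitrarily, not from the asker's data) illustrates this:

```python
import math
import random
import statistics

random.seed(1)

def poisson(lam):
    """Knuth's multiplication method; adequate for the moderate rates used here."""
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p <= L:
            return k
        k += 1

# Negative binomial as a Gamma-Poisson mixture: rate ~ Gamma(shape=r, scale=theta),
# count ~ Poisson(rate).  Then mean = r*theta and variance = r*theta*(1 + theta).
r, theta = 2.0, 3.0          # implies mean 6 and variance 24
counts = [poisson(random.gammavariate(r, theta)) for _ in range(50_000)]

m = statistics.fmean(counts)
v = statistics.pvariance(counts)
```

The sample variance comes out well above the sample mean, which is exactly the overdispersion pattern a plain Poisson cannot capture.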
28,874
How much is too much overfitting?
It's clear that if your model is doing a couple percent better on your training set than your test set, you are overfitting. That is not true. Your model has learned from the training set and hasn't "seen" the test set before, so obviously it should perform better on the training set. The fact that it performs (a little bit) worse on the test set does not by itself mean that the model is overfitting -- a "noticeable" difference can suggest it. Check the definition and description from Wikipedia: Overfitting occurs when a statistical model describes random error or noise instead of the underlying relationship. Overfitting generally occurs when a model is excessively complex, such as having too many parameters relative to the number of observations. A model that has been overfit will generally have poor predictive performance, as it can exaggerate minor fluctuations in the data. The possibility of overfitting exists because the criterion used for training the model is not the same as the criterion used to judge the efficacy of a model. In particular, a model is typically trained by maximizing its performance on some set of training data. However, its efficacy is determined not by its performance on the training data but by its ability to perform well on unseen data. Overfitting occurs when a model begins to "memorize" training data rather than "learning" to generalize from trend. In the extreme case, an overfitting model fits perfectly to the training data and poorly to the test data. However, in most real-life examples this is much more subtle, and it can be much harder to judge overfitting. Finally, it can happen that the data you have for your training and test sets are similar, so the model seems to perform fine on both sets, but when you use it on some new dataset it performs poorly because of overfitting, as in the Google flu trends example. Imagine you have data about some $Y$ and its time trend (plotted below).
You have data over times 0 to 30, and decide to use the 0-20 part of the data as a training set and 21-30 as a hold-out sample. The model performs very well on both samples and there is an obvious linear trend; however, when you make predictions on new, previously unseen data for times greater than 30, the good fit appears to be illusory. This is an abstract example, but imagine a real-life one: you have a model that predicts sales of some product; it performs very well in summer, but autumn comes and the performance drops. Your model is overfitting to the summer data -- maybe it's good only for the summer data, maybe it performed well only on this year's summer data, maybe this autumn is an outlier and the model is fine...
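A minimal illustration of the "memorizing" extreme (a hypothetical toy setup, not the asker's model): a 1-nearest-neighbour classifier trained on pure-noise labels scores perfectly on its own training set yet only about chance on a fresh sample from the same process:

```python
import random

random.seed(2)

def make_data(n):
    # Feature is pure noise, label is an independent fair coin: nothing to learn.
    return [(random.random(), random.randint(0, 1)) for _ in range(n)]

def knn1_predict(train, x):
    # 1-NN memorizes: the nearest training point decides the label.
    return min(train, key=lambda p: abs(p[0] - x))[1]

train, test = make_data(200), make_data(200)
train_acc = sum(knn1_predict(train, x) == y for x, y in train) / len(train)
test_acc = sum(knn1_predict(train, x) == y for x, y in test) / len(test)
```

Training accuracy is 100% (each point is its own nearest neighbour), while test accuracy hovers around 50%: the model has described noise, not an underlying relationship.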
28,875
What's the physical meaning of the eigenvectors of the Gram/Kernel matrix?
The eigenvalues are actually the same as those of the covariance matrix. Let $X = U \Sigma V^T$ be the singular value decomposition; then $$X X^T = U \Sigma \underbrace{V^T V}_{I} \Sigma U^T = U \Sigma^2 U^T$$ and similarly $X^T X = V \Sigma^2 V^T$. Note that in the typical case where $X$ is $n \times p$ with $n \gg p$, most of the eigenvalues of the Gram matrix will be zero. If you're using, say, an RBF kernel, none will be zero (though some will probably be incredibly small). The eigenvectors of the Gram matrix are thus seen to be the left singular vectors of $X$, the columns of $U$. One way to interpret these is: The right singular vectors (columns of $V$, the eigenvectors of the covariance matrix) give the directions that data tends to lie on in the feature space. The singular values (diagonal of $\Sigma$, square roots of the eigenvalues of either matrix) give how important each component is to the dataset as a whole. The left singular vectors (columns of $U$, the eigenvectors of the Gram matrix) give how much each data point is represented by each of the components, relative to how much they're used in the whole dataset. (Columns of $U \Sigma$ give the scores, the linear coefficients of each component when representing the data in the basis $V$.) If you take only the first few columns of $U$ (and the corresponding block of $\Sigma$), you get the data projected as well as possible onto the most important components (PCA). If a data point has a high norm of its row of $U$, that means it uses components much more than other data points do, i.e. it has high leverage / "sticks out." If $p > n$, these will all be one, in which case you can either look only at the first $k$ values (leverage scores corresponding to the best rank-k approximation) or do some kind of soft-thresholding instead. Doing that with $k=1$, which is computationally easier, gives you PageRank.
See also this thread and the links therein for everything you'd ever want to know about SVD/PCA, which you perhaps didn't realize was really your question, but it was.
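The claim that $XX^T$ and $X^TX$ share their nonzero eigenvalues is easy to verify numerically for a tiny example. For symmetric 2x2 matrices the eigenvalues follow from the trace and determinant via the quadratic formula, so plain Python suffices (the entries of $X$ are arbitrary):

```python
import math

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def transpose(A):
    return [list(col) for col in zip(*A)]

def eig2x2_sym(S):
    """Eigenvalues of a symmetric 2x2 matrix: (tr +/- sqrt(tr^2 - 4 det)) / 2."""
    tr = S[0][0] + S[1][1]
    det = S[0][0] * S[1][1] - S[0][1] * S[1][0]
    d = math.sqrt(max(tr * tr - 4.0 * det, 0.0))
    return sorted([(tr - d) / 2.0, (tr + d) / 2.0])

X = [[1.0, 2.0],
     [3.0, 4.0]]
ev_gram = eig2x2_sym(matmul(X, transpose(X)))   # spectrum of X X^T
ev_cov = eig2x2_sym(matmul(transpose(X), X))    # spectrum of X^T X
```

Both spectra coincide, as the SVD argument above guarantees (here with equal trace $\|X\|_F^2$ as well, since the matrix is square).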
28,876
Interpreting output of dredge
The function MuMIn::dredge simply returns a list of models with every possible combination of predictor variables. As for your results, allow me to disagree with what you said: I understand that model 2 is the best model and shows lND to have a negative effect on diversity. That's partially right; lND indeed has a negative effect on diversity, but from the delta (= delta AIC) you cannot distinguish model 2 from models 3, 1, and 5, since (using the common rule of thumb) they have dAIC < 2. No value means no effect. That's not correct. dredge returns a list with every possible combination of variables; if a variable doesn't have a value, it means it was not included in that model. For example, model 3 only has lNN, besides the intercept obviously. AIC values show that these models are not very informative. Is this interpretation correct or am I missing something? Since the first four models have similar support (notice also that their Akaike weights, which vary from 0 to 1, are not particularly high), I strongly suggest that you use model averaging; take a look at MuMIn::model.avg and also read Chapter 4 of Burnham & Anderson (2002). I hope this is clear enough, but feel free to ask again.
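On the Akaike-weight point: the weights follow directly from the delta-AIC values via $w_i = \exp(-\Delta_i/2) / \sum_j \exp(-\Delta_j/2)$. A short plain-Python sketch (the AIC values below are invented for illustration, not the poster's dredge output):

```python
import math

def akaike_weights(aics):
    """w_i = exp(-0.5 * dAIC_i) / sum_j exp(-0.5 * dAIC_j)."""
    best = min(aics)
    deltas = [a - best for a in aics]        # dAIC relative to the best model
    rel = [math.exp(-0.5 * d) for d in deltas]
    total = sum(rel)
    return [r / total for r in rel]

# Invented AICs for four candidate models; the first two are within dAIC < 2,
# so by the usual rule of thumb neither can be singled out as "the" best model.
weights = akaike_weights([100.0, 101.2, 103.5, 108.0])
```

Models within dAIC < 2 of each other end up with comparable weights, which is exactly why model averaging is the safer summary here.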
28,877
Correction for multiple testing on a modest number of tests (10-20) with FDR?
I see people confusing this all the time, also in this forum. I think this is caused to a large extent by the fact that in practice Benjamini-Hochberg's procedure is used as a synonym of the False Discovery Rate (and as a black box for "adjusting" p-values as requested by reviewers for their papers). One has to clearly separate the FDR concept from Benjamini-Hochberg's method. The first is a generalized type-I error rate, while the second is a multiple testing procedure which controls that error. This is very analogous, for example, to FWER and Bonferroni's procedure. Indeed, there is no immediate reason why the number of hypotheses should matter when you want to use FDR controlling methods. It just depends on your goal. In particular, assume you are testing $m$ hypotheses and your procedure rejects $R$ of them with $V$ false rejections. You use a FWER ($= \Pr[V \geq 1]$) controlling procedure if you want to make no type I errors at all. On the other hand, you use the $\text{FDR}$ when it is acceptable to make a few errors, as long as they are relatively few compared to all the rejections $R$ you made, i.e. $$ \text{FDR} = \mathbb E\left[\frac{V}{\max(R,1)}\right]$$ Thus, the answer to your question completely depends on what you want to achieve, and there is no intrinsic reason why small $m$ would be problematic. Just to illustrate a bit further: the data analysis example in Benjamini-Hochberg's seminal 1995 paper included just $m=15$ hypotheses, and of course the procedure is also valid for that case! Of course, there is a caveat to my answer: the BH procedure only got popular after "massive" (e.g. microarray) datasets started becoming available. And as you mention, it is typically used for such "big data" applications. But this is just because in such cases the $\text{FDR}$ as a criterion makes more sense, e.g. because it is scalable and adaptive and facilitates exploratory research. The FWER, on the other hand, is very stringent, as required by clinical studies etc., and punishes you too much for exploring too many options simultaneously (i.e. it is not well suited to exploratory work). Now, let's assume you have decided that the FDR is the appropriate criterion for your application. Is Benjamini-Hochberg the right choice to control the FDR when the number of hypotheses is low? I would say yes, since it is statistically valid also for low $m$. But for low $m$ you could, for example, also use another procedure, namely Benjamini and Liu's procedure, which also controls the FDR. In fact, the authors suggest its use (over Benjamini-Hochberg) when $m \leq 14$ and most hypotheses are expected to be false. So you see that there are alternative choices for FDR control! In practice, I'd still use BH, just because it is so well established and because the benefits of using Benjamini-Liu will be marginal in most cases, if they exist at all. On a final related note, there are indeed some FDR controlling procedures which you should not use for low $m$! These include all local-fdr based procedures, for example as implemented in the R packages "fdrtool" and "locfdr".
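The BH step-up procedure itself is only a few lines to implement; here is a plain-Python sketch (the demo p-values are invented, not from any real study):

```python
def benjamini_hochberg(pvalues, alpha=0.05):
    """Return the indices of hypotheses rejected by the BH step-up procedure.

    Sort the p-values, find the largest rank k with p_(k) <= (k/m) * alpha,
    and reject the hypotheses with the k smallest p-values.
    """
    m = len(pvalues)
    order = sorted(range(m), key=pvalues.__getitem__)
    k = 0
    for rank, idx in enumerate(order, start=1):
        if pvalues[idx] <= rank * alpha / m:
            k = rank                     # step-up: remember the largest qualifying rank
    return sorted(order[:k])

# A small example with m = 10 hypotheses (made-up p-values), as in the
# low-m setting discussed above.
pvals = [0.001, 0.008, 0.039, 0.041, 0.042, 0.06, 0.074, 0.205, 0.212, 0.216]
rejected = benjamini_hochberg(pvals, alpha=0.05)
```

Nothing in the procedure requires large $m$; it is valid for ten hypotheses just as for ten thousand.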
28,878
Input Normalisation for ReLU neurons
To the best of my knowledge, the closest thing to what you might be looking for is this recent article by Google researchers: Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift. Batch Normalization Consider a layer $l$'s activation output $y_l = f(Wx+b)$ where $f$ is the nonlinearity (ReLU, tanh, etc), $W,b$ are the weights and biases respectively and $x$ is the minibatch of data. What Batch Normalization (BN) does is the following: Standardize $Wx+b$ to have mean zero and variance one. We do it across the minibatch. Let $\hat{x}$ denote the standardized intermediate activation values, i.e. $\hat{x}$ is the normalized version of $Wx+b$. Apply a parameterized (learnable) affine transformation $\hat{x} \rightarrow \gamma \hat{x} + \beta.$ Apply the nonlinearity: $\hat{y}_l = f(\gamma \hat{x} + \beta)$. So, BN standardizes the "raw" (read: before we apply the nonlinearity) activation outputs to have mean zero, variance 1, and then we apply a learned affine transformation, and then finally we apply the nonlinearity. In some sense we may interpret this as allowing the neural network to learn an appropriate parameterized input distribution to the nonlinearity. As every operation is differentiable, we may learn $\gamma, \beta$ parameters via backpropagation. Affine Transformation Motivation If we did not perform a parameterized affine transformation, every nonlinearity would have as input distribution a mean zero and variance 1 distribution. This may or may not be sub-optimal. Note that if the mean zero, variance 1 input distribution is optimal, then the affine transformation can theoretically recover it by setting $\beta$ equal to the batch mean and $\gamma$ equal to the batch standard deviation. Having this parameterized affine transformation also has the added bonus of increasing the representation capacity of the network (more learnable parameters). Standardizing First Why standardize first? 
Why not just apply the affine transformation? Theoretically speaking, there is no distinction. However, there may be a conditioning issue here. By first standardizing the activation values, perhaps it becomes easier to learn optimal $\gamma, \beta$ parameters. This is purely conjecture on my part, but there have been similar analogues in other recent state of the art conv net architectures. For example, in the recent Microsoft Research technical report Deep Residual Learning for Image Recognition, they in effect learned a transformation where they used the identity transformation as a reference or baseline for comparison. The Microsoft co-authors believed that having this reference or baseline helped pre-condition the problem. I do not believe that it is too far-fetched to wonder if something similar is occurring here with BN and the initial standardization step. BN Applications A particularly interesting result is that using Batch Normalization, the Google team was able to get a tanh Inception network to train on ImageNet and get pretty competitive results. Tanh is a saturating nonlinearity and it has been difficult to get these types of networks to learn due to their saturation/vanishing gradients problem. However, using Batch Normalization, one may assume that the network was able to learn a transformation which maps the activation output values into the non-saturating regime of tanh nonlinearities. Final Notes They even reference the same Yann LeCun factoid you mentioned as motivation for Batch Normalization.
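The three BN steps described above (standardize the raw pre-activations over the minibatch, apply the learnable affine map, then the nonlinearity) can be sketched in a few lines of NumPy. This is an illustrative forward pass only, not the paper's implementation: the function name, shapes, and the epsilon added for numerical stability are my own choices.

```python
import numpy as np

def batch_norm_relu(x, W, b, gamma, beta, eps=1e-5):
    """BN forward pass as described above: standardize the raw
    pre-activations Wx+b over the minibatch, apply the learnable
    affine map (gamma, beta), then the ReLU nonlinearity."""
    z = x @ W + b                        # raw pre-activations, shape (batch, units)
    z_hat = (z - z.mean(axis=0)) / np.sqrt(z.var(axis=0) + eps)  # mean ~0, var ~1
    return np.maximum(0.0, gamma * z_hat + beta)                 # affine, then ReLU

rng = np.random.default_rng(0)
x = rng.normal(size=(64, 10))            # minibatch of 64 examples
W = rng.normal(size=(10, 4))             # layer weights
b = rng.normal(size=4)                   # layer biases
out = batch_norm_relu(x, W, b, gamma=np.ones(4), beta=np.zeros(4))
print(out.shape)                         # (64, 4)
```

With $\gamma = 1$, $\beta = 0$ the layer feeds a standardized distribution into the ReLU; during training both would be updated by backpropagation along with $W$ and $b$.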
28,879
How to simulate data to demonstrate mixed effects with R (lme4)?
If you prefer a blog article format, Hierarchical linear models and lmer is an article I wrote that features a simulation with random slopes and intercepts. Here's the simulation code I used:

rm(list = ls())
set.seed(2345)
N <- 30
unit.df <- data.frame(unit = c(1:N), a = rnorm(N))
head(unit.df, 3)
unit.df <- within(unit.df, {
  E.alpha.given.a <- 1 - 0.15 * a
  E.beta.given.a <- 3 + 0.3 * a
})
head(unit.df, 3)
library(mvtnorm)
q = 0.2
r = 0.9
s = 0.5
cov.matrix <- matrix(c(q^2, r * q * s, r * q * s, s^2), nrow = 2, byrow = TRUE)
random.effects <- rmvnorm(N, mean = c(0, 0), sigma = cov.matrix)
unit.df$alpha <- unit.df$E.alpha.given.a + random.effects[, 1]
unit.df$beta <- unit.df$E.beta.given.a + random.effects[, 2]
head(unit.df, 3)
J <- 30
M = J * N  # Total number of observations
x.grid = seq(-4, 4, by = 8/J)[0:30]
within.unit.df <- data.frame(unit = sort(rep(c(1:N), J)), j = rep(c(1:J), N), x = rep(x.grid, N))
flat.df = merge(unit.df, within.unit.df)
flat.df <- within(flat.df, y <- alpha + x * beta + 0.75 * rnorm(n = M))
simple.df <- flat.df[, c("unit", "a", "x", "y")]
head(simple.df, 3)
library(lme4)
my.lmer <- lmer(y ~ x + (1 + x | unit), data = simple.df)
cat("AIC =", AIC(my.lmer))
my.lmer <- lmer(y ~ x + a + x * a + (1 + x | unit), data = simple.df)
summary(my.lmer)
28,880
How to simulate data to demonstrate mixed effects with R (lme4)?
The data is completely fictional and the code that I used to generate it can be found here. The idea is that we would take measurements of glucose concentrations on a group of 30 athletes at the completion of 15 races, in relation to the concentration of the made-up amino acid A (AAA) in these athletes' blood. The model is:

lmer(glucose ~ AAA + (1 + AAA | athletes))

There is a fixed effect slope (glucose ~ amino acid A concentration); however, the slopes also vary between different athletes with a mean = 0 and sd = 0.5, while the intercepts for the different athletes are spread as random effects around 0 with sd = 0.2. Further, there is a correlation between intercepts and slopes of 0.8 within the same athlete. These random effects are added to a chosen intercept = 1 for the fixed effects, and slope = 2. The values of the concentration of glucose were calculated as alpha + AAA * beta + 0.75 * rnorm(observations), meaning the intercept for every athlete (i.e. 1 + random effect changes in the intercept) $+$ the concentration of amino acid AAA $*$ the slope for every athlete (i.e. 2 + random effect changes in slopes for each athlete) $+$ noise ($\epsilon$), which we set up to have sd = 0.75. So the data look like:

  athletes races      AAA   glucose
1        1     1 51.79364 104.26708
2        1     2 49.94477 101.72392
3        1     3 45.29675  92.49860
4        1     4 49.42087 100.53029
5        1     5 45.92516  92.54637
6        1     6 51.21132 103.97573
...

Unrealistic levels of glucose, but still... The summary returns:

Random effects:
 Groups   Name        Variance Std.Dev. Corr
 athletes (Intercept) 0.006045 0.07775
          AAA         0.204471 0.45218  1.00
 Residual             0.545651 0.73868
Number of obs: 450, groups: athletes, 30

Fixed effects:
             Estimate Std. Error        df t value Pr(>|t|)
(Intercept)   1.31146    0.35845 401.90000   3.659 0.000287 ***
AAA           1.93785    0.08286  29.00000  23.386  < 2e-16 ***
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

The random effects correlation is 1 instead of 0.8.
The sd = 0.2 for the random variation in intercepts is estimated as 0.07775. The standard deviation of 0.5 for random changes in slopes among athletes is estimated as 0.45218. The noise, set up with a standard deviation of 0.75, was returned as 0.73868. The intercept of the fixed effects was supposed to be 1, and we got 1.31146. The slope was supposed to be 2, and the estimate was 1.93785. Fairly close!
28,881
Input vector representation vs output vector representation in word2vec
Garten et al. {1} compared word vectors obtained by adding input word vectors with output word vectors, vs. word vectors obtained by concatenating input word vectors with output word vectors. In their experiments, concatenating yielded significantly better results. The video lecture {2} recommends to average input word vectors with output word vectors, but doesn't compare against concatenating input word vectors with output word vectors. References: {1} Garten, J., Sagae, K., Ustun, V., & Dehghani, M. (2015, June). Combining Distributed Vector Representations for Words. In Proceedings of NAACL-HLT (pp. 95-101). {2} Stanford CS224N: NLP with Deep Learning by Christopher Manning | Winter 2019 | Lecture 2 – Word Vectors and Word Senses. https://youtu.be/kEMJRjEdNzM?t=1565 (mirror)
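Both combination schemes are one-liners once the input and output embedding matrices have been trained. A toy NumPy sketch (the vectors and function names are made up for illustration, not taken from Garten et al.):

```python
import numpy as np

def combine_add(v_in, v_out):
    # element-wise sum: keeps the original dimensionality
    return v_in + v_out

def combine_concat(v_in, v_out):
    # concatenation: doubles the dimensionality but keeps both
    # representations intact, which worked better in Garten et al.'s experiments
    return np.concatenate([v_in, v_out])

v_in = np.array([0.1, -0.4, 0.7])    # toy input ("center word") vector
v_out = np.array([0.3, 0.2, -0.5])   # toy output ("context word") vector
print(combine_add(v_in, v_out))      # shape (3,)
print(combine_concat(v_in, v_out))   # shape (6,)
```

Averaging, as recommended in the lecture, is just `combine_add(v_in, v_out) / 2` and yields the same downstream similarities as adding (cosine similarity is scale-invariant).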
28,882
What is Cronbach's Alpha intuitively?
You can see what it means by studying the formula: $$ \alpha = \frac{K}{K-1}\left(1-\frac{\sum \sigma^2_{x_i}}{\sigma^2_T}\right) $$ where $T=x_1 + x_2 + ... x_K$. $T$ is the total score of a test with $K$ items, with scores $x_i$, respectively. Unpack the formula, using what we know about the covariance of a sum of RV's. If the test items are independent (think $K$ random Trivial Pursuit questions), then the variance of $T$ is the sum of the variances of the $x_i$ and $\alpha=0$. Suppose that the $x_i$ are actually the same question repeated $K$ times. Then $\sigma^2_T=K^2 \sigma^2_x$, and a little algebra shows that $\alpha=1$. These are the extreme cases. Normally, there will be some positive correlations between the items (assuming that everything is coded in the same direction), so the ratio of the variances will be smaller than 1. The greater the covariances, the larger the value of $\alpha$. Remember that there are $K(K-1)/2$ covariances in the $x_i$ to get the variance of $T$, so you need most items to be reasonably correlated with most other items to get a healthy $\alpha$. It is, as @ttnphns pointed out, an almost normalized average covariance. $$ \sigma^2_T = \sum \sigma^2_{x_i} + 2 \sum_{i < j}^K {\rm cov}(x_i,x_j) $$ The covariance term is in the denominator of the ratio of the variances (it is part of $\sigma^2_T$), so the larger it gets, the smaller that ratio becomes, and $\alpha$ gets closer to 1. So what does this imply? Take a very simple testing situation, where each item is correlated with an underlying factor with the same loading, thusly: $$ x_i = \lambda \xi + \epsilon_i$$ Then the covariances are of the form $\lambda^2$. If $\lambda$ is fairly large, relative to the noise $\epsilon$, I'm going to get something close to 1. In fact, if we standardize so that $\sigma^2_x=1$, $$ \alpha = \frac{K}{K-1}\left(1-\frac{1}{1+(K-1)\lambda^2}\right) $$ and $\alpha$ is basically a monotone, if non-linear, version of the factor loading.
Sadly, the converse is not true, and large $\alpha$ values can be obtained from a variety of factor structures, or really none at all. The items need to be correlated, on average, but that's not actually saying much. The Cronbach alpha is a test statistic that gets way too much publicity, in my opinion, for what it's worth. Nowadays, there is no reason not to do a factor analysis and confirm whether the test items are performing as one believes they should. The following graph shows the value of $\alpha$ when there are 20 items with identical loadings, as above. Psychologists like to get an $\alpha$ greater than 0.80, but that is achievable with a loading of 0.5 -- not exactly a tight test item.
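The formula and the one-factor case above are easy to check numerically. A NumPy sketch (sample size, seed, and the loading of 0.5 are illustrative choices of mine):

```python
import numpy as np

def cronbach_alpha(X):
    """X: (n_subjects, K_items). Implements
    alpha = K/(K-1) * (1 - sum of item variances / variance of total)."""
    K = X.shape[1]
    item_vars = X.var(axis=0, ddof=1).sum()   # sum of sigma^2_{x_i}
    total_var = X.sum(axis=1).var(ddof=1)     # sigma^2_T
    return K / (K - 1) * (1 - item_vars / total_var)

rng = np.random.default_rng(1)
n, K, lam = 2000, 20, 0.5
xi = rng.normal(size=(n, 1))                                   # common factor
X = lam * xi + np.sqrt(1 - lam**2) * rng.normal(size=(n, K))   # items with var ~1
print(round(float(cronbach_alpha(X)), 2))
# theory: K/(K-1) * (1 - 1/(1 + (K-1)*lam**2)) is about 0.87 for K=20, lam=0.5
```

This reproduces the point in the answer: 20 items loading at only 0.5 already clear the 0.80 bar. The degenerate case of the same item repeated $K$ times gives $\alpha = 1$ exactly.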
28,883
What does LS (least square) means refer to?
Consider the model: $$ y = \beta_0 + \beta_1 \text{treatment} + \beta_2 \text{block} + \beta_3 \text{year} $$ Where $y$ is some outcome of interest, treatment is a treatment factor, block is a blocking factor, and year is the year (a factor) where the experiment is repeated over several years. We would like to recover $E(Y|\text{treatment})$, but it cannot be done from this model. Instead we find $E(Y|\text{treatment}, \text{block}, \text{year})$. However, we could average the fitted value from $E(Y|\text{treatment}, \text{block}, \text{year})$ over block and year, and then think of it as $E(Y|\text{treatment})$. If the model is estimated by least squares (OLS in the linear case), this is the LS-mean (of treatment, in this case). For a reference on implementation (in R) see this pdf; it also covers LS-means from the common models.
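The "average the fitted values over the other factors" recipe can be sketched directly with ordinary least squares. Everything here (factor levels, effect sizes, sample size) is made up for illustration:

```python
import itertools
import numpy as np

rng = np.random.default_rng(2)

# toy factors: 2 treatments, 3 blocks, 2 years
n = 300
treat = rng.integers(0, 2, n)
block = rng.integers(0, 3, n)
year = rng.integers(0, 2, n)
y = 5 + 2 * treat + 0.5 * block + 1.0 * year + rng.normal(0, 1, n)

def design(treat, block, year):
    # intercept + dummy coding (reference level dropped for block)
    return np.column_stack([
        np.ones(len(treat)), treat,
        (block == 1).astype(float), (block == 2).astype(float),
        year,
    ])

beta, *_ = np.linalg.lstsq(design(treat, block, year), y, rcond=None)

def ls_mean(t):
    # predict E(Y | treatment=t, block, year) on an equal-weight grid
    # over the block and year levels, then average -> the LS-mean for t
    grid = list(itertools.product([0, 1, 2], [0, 1]))
    b = np.array([g[0] for g in grid])
    yr = np.array([g[1] for g in grid])
    X = design(np.full(len(grid), t), b, yr)
    return (X @ beta).mean()

print(ls_mean(1) - ls_mean(0))  # close to the true treatment effect of 2
```

In this purely additive model the LS-mean difference simply recovers the treatment coefficient; the averaging only matters once interactions or unbalanced designs are involved.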
28,884
What does LS (least square) means refer to?
"In an analysis of covariance model, [LS Means] are the group means after having controlled for a covariate." The blog On Biostatistics and Clinical Trials has a post with what seems to be a good layman's explanation. References therein are also helpful.
28,885
P-value vs Type 1 Error [duplicate]
A paper that may help is Murdoch, D., Tsai, Y., and Adcock, J. (2008) P-Values are Random Variables. The American Statistician. (62) 242-245. Imagine that you have a coin that you want to test if it is fair (maybe it is bent or otherwise distorted) and plan to flip the coin 10 times as your test. Clearly if you see 5 heads and 5 tails then you can't reject the null that it is fair, and most people would be highly suspicious of the coin if you saw 10 heads (or 10 tails), but to be fair we should set up, before the test, a rule or rejection region to determine whether we should reject the null hypothesis (fair coin) or not. One approach to deciding on the rejection region is to set a limit on the type I error rate and choose the rejection region such that the most extreme values, whose cumulative probabilities stay below the limit, constitute the rejection region. So if we use the traditional 0.05 as our cut-off, then we can start with the extremes and see that if the coin is fair (null is true) then the probability of seeing 0, 1, 9, or 10 heads is less than 5%, but if we add in 2 or 8 heads then the combined probability goes above 5%; so we will reject the null if we see 0, 1, 9, or 10 heads and fail to reject otherwise. A side note is that we could create a rejection region of "reject if we see 8 heads, don't reject otherwise", and that would keep the probability of rejecting when the null is true under 5%, but it seems kind of silly to say we will reject fairness if we see 8 heads but will not reject fairness if we see 9 or 10 heads. This is why the usual definitions of p-value include a phrase like "or more extreme". So for our test we have our alpha ($\alpha$) level set at 5%, but the actual probability of a type I error (null true as part of the definition) is a little above 2% (the probability of a fair coin showing 0, 1, 9, or 10 heads in 10 flips).
Instead of comparing the actual number of heads to our rejection region, we can instead calculate the probability of what we observe (or more extreme) given the null is true and compare that probability to $\alpha = 0.05$. That probability is the p-value. So 0 or 10 heads would result in a p-value of $\frac{2}{1024}$ (one way for 0, one way for 10). 1 or 9 heads would give a p-value of $\frac{22}{1024}$ (one way to see 0, one way to see 10, 10 ways to see 1, and 10 ways to see 9). If we see 2 or 8 heads then the p-value is greater than 10%. So to summarize: The probability of a Type I error is a property of the chosen cut-off $\alpha$ and the nature of the test (in cases like the t-test when all the assumptions hold, the probability of a type I error will be exactly equal to $\alpha$). The p-value is a random variable computed from the actual observed data that can be compared to $\alpha$ as one way of performing the test.
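The probabilities quoted in the coin example can be reproduced with a few lines of standard-library Python (the function name is mine):

```python
from math import comb

def p_value(heads, n=10):
    """Two-sided p-value for a fair coin: probability of an outcome
    at least as far from n/2 as the one observed ("or more extreme")."""
    k = max(heads, n - heads)
    extreme = [i for i in range(n + 1) if max(i, n - i) >= k]
    return sum(comb(n, i) for i in extreme) / 2 ** n

print(p_value(10))  # 2/1024
print(p_value(9))   # 22/1024
print(p_value(8))   # 112/1024, which is greater than 0.10
```

With the cut-off $\alpha = 0.05$, only 0, 1, 9, or 10 heads reject, and the actual type I error rate is `p_value(9)` = 22/1024, the "little above 2%" in the answer.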
28,886
P-value vs Type 1 Error [duplicate]
$\alpha$ is the threshold you set in advance: you decide to reject your null hypothesis if your p-value falls below that established $\alpha$. A p-value above the threshold does not mean you can accept the null hypothesis; you merely fail to reject it.
28,887
What is the best way to visualize difference-in-differences (multi-period) regression?
What is typically done is that you plot the averages of the outcome variable for your treatment and control group over time. So the control group here are naturally all those who did not receive the treatment, whilst the treatment group are those who receive any intensity of the treatment. That was done, for instance, in this presentation (slides 25 and 26; the regression equation is on slide 27). If you want to show the parallel trends by treatment intensity, there are different ways of doing so, and in the end it just boils down to how you want to divide them up. For instance, you can plot the outcome for the treated units at the 10th percentile, the mean, and the 90th percentile of the treatment intensity distribution. I've rarely seen this done in practice though, yet I think it is a meaningful exercise. To estimate the fading-out time of the treatment you can follow Autor (2003). He includes leads and lags of the treatment as in $$Y_{ist} = \gamma_s + \lambda_t + \sum^{M}_{m=0}\beta_{-m} D_{s,t-m} + \sum^{K}_{k=1}\beta_{+k} D_{s,t+k} + X'_{ist}\pi + \epsilon_{ist}$$ where he has data on each individual $i$, in state $s$ at time $t$, $\gamma$ are state fixed effects, $\lambda$ are time fixed effects, and $X$ are individual controls. The $m$ lags of the treatment estimate the fading-out effect from $m=0$, i.e. the treatment period, onward. You can visualize this by plotting the coefficients of the lags over time: The graph is on page 26 of his paper. The nice thing about this is that he also plots the confidence bands (vertical lines) for each coefficient so you can see when the effect is actually different from zero. In this application it seems that there is a long-run effect of the treatment in year two even though the overall treatment effect first increases and then stays stable (albeit insignificantly). You can do the same with the $k$ leads. 
However, those should be insignificant because otherwise this hints towards anticipatory behavior with respect to the treatment and therefore the treatment status may not be exogenous anymore.
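To make the lead/lag terms concrete, here is a minimal sketch of how the event-time dummies for such a specification could be constructed. All names and adoption years below are hypothetical, and the coding is the simplified exact-event-time version (one dummy per number of years before/after adoption), not taken from Autor's actual data:

```python
# Hypothetical adoption years by state; None = never treated (control).
adoption = {"A": 2000, "B": 2002, "C": None}
M, K = 2, 2   # number of lags (post-treatment) and leads (pre-treatment)

def event_dummies(state, year):
    """Indicators for being exactly m years after (lag m) or
    k years before (lead k) the state's treatment adoption."""
    t0 = adoption[state]
    if t0 is None:                     # never-treated units get all zeros
        return {**{f"lag{m}": 0 for m in range(M + 1)},
                **{f"lead{k}": 0 for k in range(1, K + 1)}}
    row = {}
    for m in range(M + 1):             # treatment year and m years after
        row[f"lag{m}"] = int(year - t0 == m)
    for k in range(1, K + 1):          # k years before treatment
        row[f"lead{k}"] = int(t0 - year == k)
    return row

# State A adopted in 2000, so in 2001 only lag1 switches on:
print(event_dummies("A", 2001))
```

Regressing the outcome on these dummies (plus state and time fixed effects) and plotting the lag coefficients with their confidence bands reproduces the kind of figure described above.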
28,888
What does lsmeans report for a generalized linear model, such as Poisson mixed model (fit with glmer)?
The output represents predictions from your model for each image. With the Poisson family, the default link function is the natural log - so those values are on the log scale. If you do lsmeans(..., type = "response"), it will back-transform the predictions to the original response scale.
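The back-transform itself is just exponentiation of the log-scale estimate. A quick illustration with a made-up number (1.5 is not from any actual model output):

```python
from math import exp

log_scale_lsmean = 1.5               # hypothetical lsmeans value on the log scale
response_scale = exp(log_scale_lsmean)
print(round(response_scale, 3))      # expected count on the response scale
```

Note that confidence limits are back-transformed the same way (exponentiate the endpoints), which is what type = "response" does for you.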
28,889
Good examples of statistics sections in applied academic journal articles
In the mid-2000s, a group of medical statisticians put their heads together and issued the STROBE statement (http://www.strobe-statement.org): STrengthening the Reporting of OBservational studies in Epidemiology. It was published in the same form in Lancet, PLoS Medicine, Journal of Clinical Epidemiology, and several others, which to me seems like the most amazing part of the whole exercise: putting heads together is not nearly as difficult as convincing a diverse group of editors to publish anything as is. There are various checklists based on the STROBE statement that help you define what a "well written" statistical part is. In an unrelated area, the U.S. Institute of Education Sciences has been accumulating evidence on the performance of various educational programs in their What Works Clearinghouse. Their Procedures and Standards Handbook delineates what constitutes a solid study (by the education community's standards; biostatisticians with a clinical trials background find them falling quite short of what the FDA requires). Spoiler alert: of the 10,000 study reports in the WWC database, only 500 "meet WWC standards without reservations"... so when you hear somebody say about an educational product that it is "research-based", there's exactly a 95% chance it's actually bogus, with research conducted by the publishers of that product without a control group.
28,890
Good examples of statistics sections in applied academic journal articles
The following is a favorite article of mine: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2650104/ Here a very well controlled clinical trial was conducted to test the commonly held belief that suppression of herpes outbreaks could reduce the transmission of HIV. It is an example of a null result, and one that is hard to discredit, because the evidence comes from an enormous and well-controlled trial. The design is immense; all aspects of possible confounding or bias were considered. What I appreciate about the statistics section is its brevity, its focus on pre-specified analyses, the clear delineation of primary vs. secondary hypotheses, the disclosure of conflicts of interest, the description of intent-to-treat and per-protocol analyses, and the explanation of the possible source(s) of bias.
28,891
How do you mathematically prove that boosting cannot have zero error in training set arranged in a square with the corners labeled plus and minus?
On this dataset, there are four nontrivial things that a stump could do: $s_1$ classifies the left two points as positive; $s_2$ classifies the right two points as positive; $s_3$ classifies the top two points as positive; $s_4$ classifies the bottom two points as positive. So the function you end up learning could be anything of the form $$\hat y(x) = \sum_{i=1}^n f_i(x),$$ where each $f$ is one of the $s_j$. Now, note that each copy of $s_1$ in that sum cancels out a copy of $s_2$, because they're opposite, and similarly for $s_3$ and $s_4$. So $\hat y$ is really an integer combination $\hat y(x) = a\,s_1(x) + b\,s_3(x)$. But the first half of that expression doesn't change when you move from top to bottom, and the second half always changes by the same amount ($b$). So we know that the output of $\hat y$ must either always increase as the datapoint moves from top to bottom (if $b < 0$), or always decrease (if $b > 0$). If it always increases when moving from top to bottom, then it can't get both the top-left and bottom-left points correct (because the top one is greater than 0 and the bottom one is less than 0). If it always decreases, then similarly it can't get both the top-right and bottom-right points correct. Therefore, no possible sum of boosted stumps can classify the dataset perfectly, QED. (EDIT: I made the proof more understandable. The previous one was true, but didn't provide much intuition, and I figured out a way to do the intuitive thing without too much case analysis.)
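The sign argument can also be confirmed by brute force. The sketch below assumes an XOR labeling of the four corners (opposite corners share a sign) and searches a grid of integer weights for $a\,s_1 + b\,s_3$; real boosting weights are of course not restricted to integers, so the grid is only illustrative, while the argument above covers arbitrary reals:

```python
from itertools import product

# XOR corners: opposite corners share a label.
points = {(-1,  1): +1, (1,  1): -1,
          (-1, -1): -1, (1, -1): +1}

def s1(x, y): return 1 if x < 0 else -1   # left half positive
def s3(x, y): return 1 if y > 0 else -1   # top half positive

def classifies_all(a, b):
    """True if sign(a*s1 + b*s3) matches every label (ties at 0 fail)."""
    return all((a * s1(x, y) + b * s3(x, y)) * label > 0
               for (x, y), label in points.items())

solutions = [(a, b) for a, b in product(range(-20, 21), repeat=2)
             if classifies_all(a, b)]
print(solutions)   # [] -- no combination separates the corners
```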
28,892
Expectation on higher-order products of normal distributions
The expectation clearly is proportional to the product of the squared scale factors $\sigma_{11}\sigma_{22}$. The constant of proportionality is obtained by standardizing the variables, which reduces $\Sigma$ to the correlation matrix with correlation $\rho = \sigma_{12}/\sqrt{\sigma_{11}\sigma_{22}}$. Assuming bivariate normality, then according to the analysis at https://stats.stackexchange.com/a/71303 we may change variables to $$X_1 = X,\ X_2 = \rho X + \left(\sqrt{1-\rho^2}\right) Y$$ where $(X,Y)$ has a standard (uncorrelated) bivariate Normal distribution, and we need only compute $$\mathbb{E}\left(X^2 (\rho X + \left(\sqrt{1-\rho^2}\right) Y)^2\right) = \mathbb{E}(\rho^2 X^4 + (1-\rho^2)X^2 Y^2 + c X^3 Y)$$ where the precise value of the constant $c$ does not matter. ($Y$ is the residual upon regressing $X_2$ against $X_1$.) Using the univariate expectations for the standard normal distribution $$\mathbb{E}(X^4)=3,\ \mathbb{E}(X^2) = \mathbb{E}(Y^2)=1,\ \mathbb{E}Y=0$$ and noting that $X$ and $Y$ are independent yields $$\mathbb{E}(\rho^2 X^4 + (1-\rho^2)X^2 Y^2 + c X^3 Y) = 3\rho^2 + (1-\rho^2) + 0 = 1 + 2\rho^2.$$ Multiplying this by $\sigma_{11}\sigma_{22}$ gives $$\mathbb{E}(X_1^2 X_2^2) = \sigma_{11}\sigma_{22} + 2\sigma_{12}^2.$$ The same method applies to finding the expectation of any polynomial in $(X_1,X_2)$, because it becomes a polynomial in $(X, \rho X + \left(\sqrt{1-\rho^2}\right)Y)$ and that, when expanded, is a polynomial in the independent normally distributed variables $X$ and $Y$. From $$\mathbb{E}(X^{2k}) = \mathbb{E}(Y^{2k}) = \frac{(2k)!}{k!2^k} = \pi^{-1/2} 2^k\Gamma\left(k+\frac{1}{2}\right)$$ for integral $k\ge 0$ (with all odd moments equal to zero by symmetry) we may derive $$\mathbb{E}(X_1^{2p}X_2^{2q}) = (2q)!2^{-p-q}\sum_{i=0}^q \rho^{2i}(1-\rho^2)^{q-i}\frac{(2p+2i)!}{(2i)! (p+i)! (q-i)!}$$ (with all other expectations of monomials equal to zero). 
This is proportional to a hypergeometric function (almost by definition: the manipulations involved are not deep or instructive), $$\frac{1}{\pi} 2^{p+q} \left(1-\rho ^2\right)^q \Gamma \left(p+\frac{1}{2}\right) \Gamma \left(q+\frac{1}{2}\right) \, _2F_1\left(p+\frac{1}{2},-q;\frac{1}{2};\frac{\rho ^2}{\rho ^2-1}\right).$$ The hypergeometric function times $\left(1-\rho ^2\right)^q$ is seen as a multiplicative correction for nonzero $\rho$.
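The closed-form sum above is easy to sanity-check numerically: at $p=q=1$ it must reduce to $1+2\rho^2$, and at $q=0$ it must give $\mathbb{E}(X^4)=3$ regardless of $\rho$. A short Python sketch (the function name is mine):

```python
from math import factorial

def standardized_moment(p, q, rho):
    """E[X1^(2p) X2^(2q)] for standard bivariate normals with
    correlation rho, via the closed-form sum in the answer."""
    total = 0.0
    for i in range(q + 1):
        total += (rho**(2*i) * (1 - rho**2)**(q - i)
                  * factorial(2*p + 2*i)
                  / (factorial(2*i) * factorial(p + i) * factorial(q - i)))
    return factorial(2*q) * 2.0**(-p - q) * total

for rho in (0.0, 0.3, 0.8):
    print(standardized_moment(1, 1, rho), 1 + 2*rho**2)  # should agree
print(standardized_moment(2, 0, 0.5))                    # should be 3
```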
28,893
How to test the median of a population?
Synopsis The count of data exceeding $3.5$ has a Binomial distribution with unknown probability $p$. Use this to conduct a Binomial test of $p=1/2$ against the alternative $p\ne 1/2$. The rest of this post explains the underlying model and shows how to perform the calculations. It provides working R code to carry them out. An extended account of the underlying hypothesis testing theory is provided in my answer to "What is the meaning of p-values and t-values in statistical tests?". The statistical model Assuming the values are reasonably diverse (with few ties at $3.5$), then under your null hypothesis, any randomly sampled value has a $1/2=50\%$ chance of exceeding $3.5$ (since $3.5$ is characterized as the middle value of the population). Assuming all $250$ values were randomly and independently sampled, the number of them exceeding $3.5$ will therefore have a Binomial$(250,1/2)$ distribution. Let us call this number the "count," $k$. On the other hand, if the population median differs from $3.5$, the chance of a randomly sampled value exceeding $3.5$ will differ from $1/2$. This is the alternative hypothesis. Finding a suitable test The best way to distinguish the null situation from its alternatives is to look at the values of $k$ that are most likely under the null and less likely under the alternatives. These are the values near $1/2$ of $250$, equal to $125$. Thus, a critical region for your test consists of values relatively far from $125$: close to $0$ or close to $250$. But how far from $125$ must they be to constitute significant evidence that $3.5$ is not the population median? It depends on your standard of significance: this is called the test size, often termed $\alpha$. Under the null hypothesis, there should be close to--but not more than--an $\alpha$ chance that $k$ will be in the critical region. 
Ordinarily, when we have no preconceptions about which alternative will apply--a median greater or less than $3.5$--we try to construct the critical region so that there is half of that chance, $\alpha/2$, that $k$ is low and the other half, $\alpha/2$, that $k$ is high. Because we know the distribution of $k$ under the null hypothesis, this information is enough to determine the critical region. Technically, there are two common ways to carry out the calculation: compute the Binomial probabilities or approximate them with a Normal distribution. Calculation with binomial probabilities Use the percentage point (quantile) function. In R, for instance, this is called qbinom and would be invoked like alpha <- 0.05 # Test size c(qbinom(alpha/2, 250, 1/2)-1, qbinom(1-alpha/2, 250, 1/2)+1) The output for $\alpha=0.05$ is 109 141 It means that the critical region comprises all the low values of $k$ between (and including) $0$ and $109$, together with all the high values of $k$ between (and including) $141$ and $250$. As a check, we can ask R to calculate the chance that k lies in that region when the null is true: pbinom(109, 250, 1/2) + (1-pbinom(141-1, 250, 1/2)) The output is $0.0497$, very close to--but not greater than--$\alpha$ itself. Because the critical region must end at a whole number, it is not usually possible to make this actual test size exactly equal to the nominal test size $\alpha$, but in this case the two values are very close indeed. Calculation with the normal approximation The mean of a Binomial$(250, 1/2)$ distribution is $250\times 1/2=125$ and its variance is $250\times 1/2\times (1-1/2) = 250/4$, making its standard deviation equal to $\sqrt{250/4}\approx 7.9$. We will replace the Binomial distribution with a Normal distribution. 
The standard Normal distribution has $\alpha/2=0.05/2$ of its probability less than $-1.95996$, as computed by the R command qnorm(alpha/2) Because Normal distributions are symmetric, it also has $0.05/2$ of its probability greater than $+1.95996$. Therefore the critical region consists of values of $k$ that are more than $1.95996$ standard deviations away from $125$. Compute these thresholds: they equal $125 \pm 7.9\times 1.96 \approx 109.5, 140.5$. The calculation can be carried out in one swoop as 250*1/2 + sqrt(250*1/2*(1-1/2)) * qnorm(alpha/2) * c(1,-1) Since $k$ has to be a whole number, we see it will fall into the critical region when it is $109$ or less or $141$ or greater. This answer is identical to the one obtained using the exact Binomial calculation. This typically is the case when $p$ is nearer $1/2$ than it is to $0$ or $1$, the sample size is moderate to large (tens or more), and $\alpha$ is not very small (a few percent). This test, because it assumes nothing about the population (except that it doesn't have a lot of probability focused right on its median), is not as powerful as other tests that make specific assumptions about the population. If the test nevertheless rejects the null, there's no need to be concerned about lack of power. Otherwise, you have to make some delicate trade-offs between what you are willing to assume and what you are able to conclude about the population.
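For readers without R, the exact Binomial calculation above can be reproduced with nothing but the Python standard library (a sketch mirroring the qbinom/pbinom steps):

```python
from math import comb

n, alpha = 250, 0.05
pmf = [comb(n, k) / 2**n for k in range(n + 1)]   # Binomial(250, 1/2) pmf

def cdf(k):
    return sum(pmf[:k + 1])

# Largest k_low whose lower-tail probability stays within alpha/2,
# and its mirror image by symmetry of p = 1/2.
k_low = max(k for k in range(n + 1) if cdf(k) <= alpha / 2)
k_high = n - k_low
size = cdf(k_low) + (1 - cdf(k_high - 1))         # actual test size

print(k_low, k_high)    # 109 141
print(round(size, 4))   # 0.0497
```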
How to test the median of a population?
Synopsis The count of data exceeding $3.5$ has a Binomial distribution with unknown probability $p$. Use this to conduct a Binomial test of $p=1/2$ against the alternative $p\ne 1/2$. The rest of thi
How to test the median of a population? Synopsis The count of data exceeding $3.5$ has a Binomial distribution with unknown probability $p$. Use this to conduct a Binomial test of $p=1/2$ against the alternative $p\ne 1/2$. The rest of this post explains the underlying model and shows how to perform the calculations. It provides working R code to carry them out. An extended account of the underlying hypothesis testing theory is provided in my answer to "What is the meaning of p-values and t-values in statistical tests?". The statistical model Assuming the values are reasonably diverse (with few ties at $3.5$), then under your null hypothesis, any randomly sampled value has a $1/2=50\%$ chance of exceeding $3.5$ (since $3.5$ is characterized as the middle value of the population). Assuming all $250$ values were randomly and independently sampled, the number of them exceeding $3.5$ will therefore have a Binomial$(250,1/2)$ distribution. Let us call this number the "count," $k$. On the other hand, if the population median differs from $3.5$, the chance of a randomly sampled value exceeding $3.5$ will differ from $1/2$. This is the alternative hypothesis. Finding a suitable test The best way to distinguish the null situation from its alternatives is to look at the values of $k$ that are most likely under the null and less likely under the alternatives. These are the values near $1/2$ of $250$, equal to $125$. Thus, a critical region for your test consists of values relatively far from $125$: close to $0$ or close to $250$. But how far from $125$ must they be to constitute significant evidence that $3.5$ is not the population median? In depends on your standard of significance: this is called the test size, often termed $\alpha$. Under the null hypothesis, there should be close to--but not more than--an $\alpha$ chance that $k$ will be in the critical region. 
Ordinarily, when we have no preconceptions about which alternative will apply--a median greater or less than $3.5$--we try to construct the critical region so that there is half of that chance, $\alpha/2$, that $k$ is low and the other half, $\alpha/2$, that $k$ is high. Because we know the distribution of $k$ under the null hypothesis, this information is enough to determine the critical region. Technically, there are two common ways to carry out the calculation: compute the Binomial probabilities or approximate them with a Normal distribution. Calculation with binomial probabilities Use the percentage point (quantile) function. In R, for instance, this is called qbinom and would be invoked like alpha <- 0.05 # Test size c(qbinom(alpha/2, 250, 1/2)-1, qbinom(1-alpha/2, 250, 1/2)+1) The output for $\alpha=0.05$ is 109 141 It means that the critical region comprises all the low values of $k$ between (and including) $0$ and $109$, together with all the high values of $k$ between (and including) $141$ and $250$. As a check, we can ask R to calculate the chance that k lies in that region when the null is true: pbinom(109, 250, 1/2) + (1-pbinom(141-1, 250, 1/2)) The output is $0.0497$, very close to--but not greater than--$\alpha$ itself. Because the critical region must end at a whole number, it is not usually possible to make this actual test size exactly equal to the nominal test size $\alpha$, but in this case the two values are very close indeed. Calculation with the normal approximation The mean of a Binomial$(250, 1/2)$ distribution is $250\times 1/2=125$ and its variance is $250\times 1/2\times (1-1/2) = 250/4$, making its standard deviation equal to $\sqrt{250/4}\approx 7.9$. We will replace the Binomial distribution with a Normal distribution. 
The standard Normal distribution has $\alpha/2=0.05/2$ of its probability less than $-1.95996$, as computed by the R command qnorm(alpha/2) Because Normal distributions are symmetric, it also has $0.05/2$ of its probability greater than $+1.95996$. Therefore the critical region consists of values of $k$ that are more than $1.95996$ standard deviations away from $125$. Compute these thresholds: they equal $125 \pm 7.9\times 1.96 \approx 109.5, 140.5$. The calculation can be carried out in one swoop as 250*1/2 + sqrt(250*1/2*(1-1/2)) * qnorm(alpha/2) * c(1,-1) Since $k$ has to be a whole number, we see it will fall into the critical region when it is $109$ or less or $141$ or greater. This answer is identical to the one obtained using the exact Binomial calculation. This typically is the case when $p$ is nearer $1/2$ than it is to $0$ or $1$, the sample size is moderate to large (tens or more), and $\alpha$ is not very small (a few percent). This test, because it assumes nothing about the population (except that it doesn't have a lot of probability focused right on its median), is not as powerful as other tests that make specific assumptions about the population. If the test nevertheless rejects the null, there's no need to be concerned about lack of power. Otherwise, you have to make some delicate trade-offs between what you are willing to assume and what you are able to conclude about the population.
28,894
Show that if $X \sim Bin(n, p)$, then $E|X - np| \le \sqrt{npq}.$
So that the comment thread doesn't explode I'm collecting my hints toward a completely elementary proof (you can do it shorter than this but hopefully this makes each step intuitive). I've deleted most of my comments (which unfortunately leaves the comments looking a little disjointed). Let $Y=X-np$. Note $E(Y)=0$. Show $\text{Var}(Y)=npq$. If you already know $\text{Var}(X)$, you could just state $\text{Var}(Y)$, since shifting by a constant does nothing to variance. Let $Z=|Y|$. Write an obvious inequality in $\text{Var}(Z)$, expand $\text{Var}(Z)$ and use the previous result. [You may want to slightly reorganize this into a clear proof, but I am attempting to motivate how to arrive at a proof, not just the final proof.] That's all there is to it. It's 3 or 4 simple lines, using nothing more complicated than basic properties of variance and expectation (the only way the binomial comes into it at all is in giving the specific form of $E(X)$ and $\text{Var}(X)$ - you could prove the general case that the mean deviation is always $\leq \sigma$ just as readily). [Alternatively, if you're familiar with Jensen's inequality, you can do it slightly more briefly.] -- Now that some time has passed, I'll outline a little more detail about how to approach it: Let $Z=|X-np|$. Then $\text{Var}(Z)=E(Z^2)-E(Z)^2$, and $E(Z^2)=E[(X-np)^2]$ ... Note that variances must be non-negative. The result follows.
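Not a proof, but the bound is easy to sanity-check numerically by summing the exact Binomial pmf. A small Python sketch (the function name is my own):

```python
from math import comb, sqrt

def mean_abs_dev(n, p):
    """E|X - np| for X ~ Binomial(n, p), by direct summation over the pmf."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) * abs(k - n * p)
               for k in range(n + 1))

# The claimed bound E|X - np| <= sqrt(npq) holds in every case tried:
for n, p in [(10, 0.3), (50, 0.5), (100, 0.9), (3, 0.01)]:
    assert mean_abs_dev(n, p) <= sqrt(n * p * (1 - p))
```

Note the $n=1$, $p=1/2$ case attains equality: $E|X-1/2|=1/2=\sqrt{npq}$, so the inequality is sharp.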
28,895
How should I model interactions between explanatory variables when one of them may have quadratic and cubic terms?
None of those approaches will work properly. Approach 3. came close, but then you said you would prune out insignificant terms. This is problematic because collinearities make it impossible to find which terms to remove, and because this would give you the wrong degrees of freedom in hypothesis tests if you want to preserve type I error. Depending on the effective sample size and signal:noise ratio in your problem I'd suggest fitting a model with all product and main effect terms, and interpreting the model using plots and "chunk tests" (multiple d.f. tests of related terms, i.e., a test for overall interaction, test for nonlinear interaction, test for overall effect including main effect + interaction, etc.). The R rms package makes this easy to do for standard univariate models and for longitudinal models when $Y$ is multivariate normal. Example: # Fit a model with splines in x1 and x2 and tensor spline interaction surface # for the two. Model is additive and linear in x3. # Note that splines typically fit better than ordinary polynomials f <- ols(y ~ rcs(x1, 4) * rcs(x2, 4) + x3) anova(f) # get all meaningful hypothesis tests that can be inferred # from the model formula bplot(Predict(f, x1, x2)) # show joint effects plot(Predict(f, x1, x2=3)) # vary x1 and hold x2 constant When you see the anova table you'll see lines labeled All Interactions which for the whole model tests the combined influence of all interaction terms. For an individual predictor this is only helpful when the predictor interacts with more than one variable. There is an option in the print method for anova.rms to show by each line in the table exactly which parameters are being tested against zero. All of this works with mixtures of categorical and continuous predictors. If you want to use ordinary polynomials use pol instead of rcs. Unfortunately I haven't implemented mixed effect models.
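The "chunk test" idea — jointly testing a block of related coefficients rather than pruning them one at a time — can be sketched outside rms as an extra-sum-of-squares F test comparing the full fit against one with the whole interaction chunk dropped. A minimal numpy illustration on invented deterministic data (the helper and the data are mine, not from rms):

```python
import numpy as np

def rss(X, y):
    """Residual sum of squares from an OLS fit of y on the columns of X."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return float(((y - X @ beta) ** 2).sum())

# Toy data with a genuine x1*x2 interaction, plus a small lack-of-fit
# term so the full model's RSS is not exactly zero
x1 = np.arange(50) / 10.0
x2 = (np.arange(50) * 7 % 10) / 10.0
y = x1 + x2 + 2.0 * x1 * x2 + 0.5 * np.sin(5 * x1)

ones = np.ones_like(x1)
full = np.column_stack([ones, x1, x2, x1 * x2])
reduced = np.column_stack([ones, x1, x2])   # interaction chunk removed

q = 1   # number of parameters in the chunk being tested
f_stat = ((rss(reduced, y) - rss(full, y)) / q) / (rss(full, y) / (len(y) - 4))
# A large F is evidence for keeping the whole interaction chunk
```

With splines the chunk would contain several parameters (larger q), but the comparison of nested fits works the same way.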
28,896
How should I model interactions between explanatory variables when one of them may have quadratic and cubic terms?
I am a fan of using nonparametric smoothing regressions to assess functional forms of relationships between dependent variables and predictors, even when I am subsequently going to estimate parametric regression models. While I have very often found nonlinear relationships, I have never found a nonlinear interaction term, even when the main effects are strongly nonlinear. My take home: interaction effects need not be composed of the same functional forms as the predictors of which they are comprised.
28,897
Confused about the visual explanation of eigenvectors: how can visually different datasets have the same eigenvectors?
You don't have to do PCA over the correlation matrix; you can decompose the covariance matrix as well. Note that these will typically yield different solutions. (For more on this, see: PCA on correlation or covariance?) In your second figure, the correlations are the same, but the groups look different. They look different because they have different covariances. However, the variances are also different (e.g., the red group varies over a wider range of X1), and the correlation is the covariance divided by the standard deviations (${\rm Cov}_{xy} / {\rm SD}_x{\rm SD}_y$). As a result, the correlations can be the same. Again, if you perform PCA with these groups using the covariance matrices, you will get a different result than if you use the correlation matrices.
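To see this concretely: two groups can share the same correlation while their covariance matrices, and hence their covariance-PCA eigenvectors, differ. For a 2×2 symmetric matrix the leading axis has the closed form $\tan(2\theta) = 2\sigma_{xy}/(\sigma_{xx}-\sigma_{yy})$. A small Python sketch with invented numbers:

```python
from math import atan2, pi

def principal_angle(sxx, syy, sxy):
    """Angle (radians) of a principal axis of [[sxx, sxy], [sxy, syy]],
    from the 2x2 closed form tan(2*theta) = 2*sxy / (sxx - syy)."""
    return 0.5 * atan2(2 * sxy, sxx - syy)

r = 0.8                                  # same correlation in both groups
# Group A: sd_x = sd_y = 1  -> covariance r*1*1 = 0.8
# Group B: sd_x = 3, sd_y = 1 -> covariance r*3*1 = 2.4
angle_a = principal_angle(1.0, 1.0, r * 1.0 * 1.0)
angle_b = principal_angle(9.0, 1.0, r * 3.0 * 1.0)

# Both correlation matrices are [[1, .8], [.8, 1]], so correlation-PCA
# gives identical 45-degree eigenvectors; the covariance axes disagree:
assert abs(angle_a - pi / 4) < 1e-12
assert abs(angle_a - angle_b) > 0.3
```

Group B's wider spread in $X_1$ rotates its covariance axis toward the $X_1$ direction, exactly the effect visible in plots like the one in the question.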
28,898
Are unequal groups a problem for one-way ANOVA?
Unevenness of sample size is not in itself an issue when the assumptions are satisfied. The test is still completely valid. However, it does reduce level-robustness to heteroskedasticity substantially - if the sample sizes are equal (or very close to it) then the test is robust to heteroskedasticity (at least, in that the level isn't much affected). With very different sample sizes, the Welch adjustment to degrees of freedom is a safer choice. If you have the ability to choose the sample sizes, power is better when the sample sizes are equal or nearly equal. See gung's answer here for details on this issue.
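The Welch adjustment replaces the pooled-error degrees of freedom with an approximate df driven by the per-group variances and sizes. A sketch of Welch's (1951) df formula for $k$ groups — the function is my own illustration, not taken from any package:

```python
def welch_df(variances, ns):
    """Approximate (df1, df2) for Welch's one-way ANOVA, given each group's
    sample variance and size (Welch, 1951)."""
    k = len(ns)
    w = [n / v for v, n in zip(variances, ns)]   # precision weights n_i / s_i^2
    W = sum(w)
    lam = sum((1 - wi / W) ** 2 / (ni - 1) for wi, ni in zip(w, ns))
    return k - 1, (k * k - 1) / (3 * lam)

# Equal variances and equal n = 10 in k = 3 groups: df2 works out to 18,
# below the classical N - k = 27 -- the price of not assuming equal variances
df1, df2 = welch_df([4.0, 4.0, 4.0], [10, 10, 10])
```

Unequal variances or very unequal group sizes shrink df2 further, which is how the adjustment protects the test's level.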
28,899
Why KL-Divergence uses "ln" in its formula?
This is somewhat intuitive; I hope it gives some ideas. The KL divergence has several mathematical meanings. Although it is used to compare distributions, it comes from the field of information theory, where it measures how much "information" is lost when coding a source using a different distribution other than the real one. In information theory, it can also be defined as a difference of entropies - the cross entropy of $P$ and $Q$ minus the entropy of $P$. So to discuss KL divergence, we need to understand the meaning of entropy. The entropy is the measure of "information" in a source, and generally describes how "surprised" you will be with the outcome of the random variable. For instance, if you have a uniform distribution, you will always be "surprised" because there is a wide range of values it can take. It has high entropy. However, if the RV is a coin with $p=0.9$, then you will probably not be surprised, because it will succeed 90% of the time, so it has low entropy. Entropy is defined as $H(X)=-\sum_x P(x)\log P(x)=E[-\log P(X)]$, which is the expectation of $I(X)$, the information of a source. Why the log? One reason is the logarithm property $\log(xy)=\log(x)+\log(y)$, meaning the information of a source composed of independent sources ($p(x)=p_1(x)p_2(x)$) will have the sum of their information. This can only happen by using a logarithm.
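Both points are easy to check numerically: the uniform source is "more surprising" than the biased coin, and independence makes entropies add, precisely because $\log(xy)=\log x+\log y$. A tiny Python illustration:

```python
from math import log2

def entropy(ps):
    """Shannon entropy in bits: H = -sum p*log2(p), with 0*log(0) taken as 0."""
    return -sum(p * log2(p) for p in ps if p > 0)

fair = entropy([0.5, 0.5])    # 1.0 bit: maximal surprise for two outcomes
coin = entropy([0.9, 0.1])    # about 0.47 bits: mostly predictable
assert fair > coin

# Joint distribution of two independent sources: p(x, y) = p1(x) * p2(y)
joint = entropy([p * q for p in (0.5, 0.5) for q in (0.9, 0.1)])
assert abs(joint - (fair + coin)) < 1e-12   # entropies add via the log
```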
28,900
Why KL-Divergence uses "ln" in its formula?
In short, because Shannon entropy uses logarithm, see: What is the role of the logarithm in Shannon's entropy? KL-divergence is typically defined as cross entropy minus entropy.
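That identity is easy to verify numerically. A short Python check using the natural log (nats); the function names are my own choices:

```python
from math import log

def entropy(p):
    """H(P) = -sum p*ln(p), in nats."""
    return -sum(pi * log(pi) for pi in p if pi > 0)

def cross_entropy(p, q):
    """Expected code length (nats) when coding P with a code built for Q."""
    return -sum(pi * log(qi) for pi, qi in zip(p, q) if pi > 0)

def kl(p, q):
    """KL(P || Q) = sum p*ln(p/q)."""
    return sum(pi * log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

p, q = [0.5, 0.3, 0.2], [0.4, 0.4, 0.2]
assert abs(kl(p, q) - (cross_entropy(p, q) - entropy(p))) < 1e-12
assert kl(p, q) > 0   # Gibbs' inequality: zero only when P = Q
```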