idx: int64 (1 – 56k)
question: string (lengths 15 – 155)
answer: string (lengths 2 – 29.2k)
question_cut: string (lengths 15 – 100)
answer_cut: string (lengths 2 – 200)
conversation: string (lengths 47 – 29.3k)
conversation_cut: string (lengths 47 – 301)
45,001
Does order of events matter in Bayesian update?
In order for this to be the case, the random variables must be exchangeable. Your example is a little different since $p>1/2$ isn't an event. An event should be in the support of the likelihood. In this case, events are only constituted by binomial random variables, or sums thereof.
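The answer's condition can be illustrated with the simplest exchangeable case, a Beta–Binomial coin model: a conjugate update depends only on the counts of heads and tails, so any ordering of the same observations yields the same posterior. The helper function and the data below are illustrative, not from the answer.

```python
# Beta(a, b) prior for a coin's heads probability p; each observation is
# heads (1) or tails (0). A conjugate update just increments the counts,
# so the posterior depends only on totals, not on the order of events.
def update(a, b, obs):
    for x in obs:
        a, b = a + x, b + (1 - x)
    return a, b

post1 = update(1, 1, [1, 1, 0, 1, 0])   # one ordering
post2 = update(1, 1, [0, 0, 1, 1, 1])   # same data, different order
print(post1, post2)  # both (4, 3)
```

This is exactly the exchangeability point: permuting the sequence of binomial observations leaves the sufficient statistics, and hence the posterior, unchanged.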
45,002
How useful are linear hypotheses?
These linear hypotheses on the coefficient vector have three main uses. Testing the existence of relationships: We can test the existence of relationships between some subset of the explanatory variables and the response variable. To do this, let $\mathbf{e}_\mathcal{S}$ denote the indicator vector for the subset $\mathcal{S}$ and test the linear hypotheses: $$H_0: \mathbf{e}_\mathcal{S} \boldsymbol{\beta} = 0 \quad \quad \quad H_A: \mathbf{e}_\mathcal{S} \boldsymbol{\beta} \neq 0.$$ Testing a specified magnitude for the relationship: We can test the magnitude of a relationship between an explanatory variable and the response variable against some specified value of interest. This is often useful when a particular magnitude has some practical significance (e.g., it is often useful to test whether the true coefficient is equal to one). To test $\beta_k = b$ we use the linear hypotheses: $$H_0: \mathbf{e}_k \boldsymbol{\beta} = b \quad \quad \quad H_A: \mathbf{e}_k \boldsymbol{\beta} \neq b.$$ Testing the expected responses of new explanatory variables: We can test the expected values of the responses corresponding to a new set of explanatory variables. Taking new explanatory data $\boldsymbol{X}_\text{new}$ we get corresponding expected values $\mathbb{E}(\boldsymbol{Y}_\text{new}) = \boldsymbol{X}_\text{new} \boldsymbol{\beta}$. This means that we can test the hypothesis $\mathbb{E}(\boldsymbol{Y}_\text{new}) = \boldsymbol{y}$ via the hypotheses: $$H_0: \boldsymbol{X}_\text{new} \boldsymbol{\beta} = \boldsymbol{y} \quad \quad \quad H_A: \boldsymbol{X}_\text{new} \boldsymbol{\beta} \neq \boldsymbol{y}.$$ As you can see, the first use is to test whether some of the coefficients are zero, which is a test of whether those explanatory variables are related to the response in the model. However, you can also undertake more general tests of a specific magnitude for the relationship, and you can use the linear test to check the expected response for new data.
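As a rough illustration of the second use, here is a sketch of testing $\beta_k = b$ with a Wald-type t-statistic computed from scratch in NumPy. The simulated data, seed, and variable names are all assumptions for illustration, not part of the answer.

```python
import numpy as np

# Sketch: test H0: beta_k = b in OLS via the t-statistic
#   t = (c' betahat - b) / sqrt(s^2 * c' (X'X)^{-1} c),  with c = e_k.
rng = np.random.default_rng(0)
n = 200
X = np.column_stack([np.ones(n), rng.normal(size=n)])
beta_true = np.array([2.0, 1.0])
y = X @ beta_true + rng.normal(scale=0.5, size=n)

XtX_inv = np.linalg.inv(X.T @ X)
betahat = XtX_inv @ X.T @ y                    # OLS estimate
resid = y - X @ betahat
s2 = resid @ resid / (n - X.shape[1])          # residual variance estimate

c = np.array([0.0, 1.0])   # e_k selecting the slope
b = 1.0                    # hypothesized value (true in this simulation)
t = (c @ betahat - b) / np.sqrt(s2 * c @ XtX_inv @ c)
print(round(float(t), 3))  # moderate value, since H0 is true here
```

The same machinery with a matrix of several contrast rows gives the F-test versions of all three uses.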
45,003
How useful are linear hypotheses?
When you fit a linear model, statistical software gives you the point estimates, confidence intervals, test statistics, and p-values of the $\beta$s. If you are only interested in these $\beta$s, you can stop there (for example, simple linear regression has just one intercept and one slope, so the $\beta$s themselves are enough). But for a more complicated model you will not be satisfied by the $\beta$s alone, and you will want to estimate and test linear combinations of the $\beta$s. At that point the importance of the C matrix becomes obvious. For complicated models, such as models with interactions, a C matrix must be constructed. Example 1: In an ANOVA, suppose one categorical covariate has 3 levels and level 1 is the reference. Two $\beta$s give you the differences between level 2 vs 1 and level 3 vs 1. If you want the difference between levels 2 and 3, you need the C matrix (0 1 -1) (the first 0 is for the intercept). If you want to estimate the mean for level 3, C = (1 0 1) is needed. Example 2: If you want to test multiple hypotheses simultaneously, see Testing the general linear hypothesis: $H_0: \beta_1 = \beta_2 = \beta_3 = \beta_4 = \beta$. Here T = C. Example 3: If interactions exist, we need a linear relation for each combination (cell) of the interaction; a 16x16 C matrix gives 8 intercepts and slopes. See How to understand the coefficients of a three-way interaction in a regression? You can find more examples on the internet and in textbooks. In summary, for linear models, constructing the C matrix amounts to half the theory of the linear model.
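A minimal numeric sketch of Example 1, assuming dummy coding with level 1 as the reference; the group means used here are made up for illustration.

```python
import numpy as np

# 3-level factor, level 1 reference, dummy coding:
# fitted coefficients are (intercept, level2 - level1, level3 - level1).
mu1, mu2, mu3 = 10.0, 12.0, 15.0
beta = np.array([mu1, mu2 - mu1, mu3 - mu1])

c_diff = np.array([0.0, 1.0, -1.0])  # contrast: level 2 minus level 3
c_mu3  = np.array([1.0, 0.0, 1.0])   # contrast: mean of level 3

print(c_diff @ beta)  # -3.0  (mu2 - mu3)
print(c_mu3 @ beta)   # 15.0  (mu3)
```

The same C vectors, applied to the estimated coefficient vector and its covariance matrix, give the point estimate and standard error for each linear combination.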
45,004
Power calculations using pilot effect sizes
This just happens to be a topic that has popped up in a few different areas lately: this interactive tool that accompanies a publication on the topic: http://pilotpower.table1.org/ ; this Lakens pre-print: https://psyarxiv.com/b7z4q ; and this post from Andrew Gelman: http://andrewgelman.com/2018/03/20/purpose-pilot-study-demonstrate-feasibility-experiment-not-estimate-treatment-effect/ Of these, Gelman is the most direct in stating that the purpose of a pilot study isn't to estimate an effect at all. Lakens seems to suggest that you will probably only use your pilot effect size if it gives you a feasible sample size, so you're setting yourself up for failure there too. He also gives a bit more advice if you still want to do a pilot: use sequential analysis (i.e. your pilot study rolls into your actual study), OR use the lower bound of the 80% interval (as you are considering). In other posts I've seen him argue for conducting a full decision analysis (which requires a utility function) using your pilot data and then conducting a value-of-information analysis to determine whether a trial is worthwhile and, if so, how large it should be. This is consistent with how health economists think of accruing evidence for decision making. The paper attached to the interactive tool (and Gelman) both give different versions of the argument that you should use your content expertise about similar interventions/phenomena to hypothesize either a realistic effect you want to detect or the minimally important effect (i.e. the smallest effect you would care about if true). Frank Harrell argues that your sample size calculation should capture your uncertainty in the true effect parameter, so that you get a range of sample sizes. The adaptive-trial literature would say you identify an effect you care about, and then use the predictive distribution of your pilot to tell you how likely you would be to find a statistically significant result if you continued to recruit to a full sample (this is a version of sequential analysis).
There are probably more views on this as well, so sorry if all I've done is muddy the waters a bit. Outside of hardcore trialists, industry, and stats methods people, though, you'll find that most people don't pay much attention to any of this and just use the effect from their pilot (as long as it gives them a feasible sample size ;) )
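To make the "lower bound of the 80% interval" idea concrete, here is a rough normal-approximation sample-size sketch. The per-group formula $n = 2(z_{1-\alpha/2}+z_{\text{power}})^2/d^2$ and the two effect sizes are illustrative assumptions, not taken from any of the cited sources.

```python
from statistics import NormalDist

# Normal-approximation per-group sample size for a two-sample comparison
# of means with standardized effect size d.
def n_per_group(d, alpha=0.05, power=0.80):
    z = NormalDist().inv_cdf(1 - alpha / 2) + NormalDist().inv_cdf(power)
    return 2 * z**2 / d**2

d_pilot = 0.5   # pilot point estimate (hypothetical)
d_lower = 0.3   # lower bound of the pilot's 80% CI (hypothetical)
print(round(n_per_group(d_pilot)))  # optimistic plan
print(round(n_per_group(d_lower)))  # conservative plan, much larger
```

The conservative plan is what you pay for acknowledging the noise in a pilot effect estimate: shrinking the planning effect from 0.5 to 0.3 roughly triples the required sample.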
45,005
Power calculations using pilot effect sizes
Perhaps worth expanding on one of the points @Tdisher makes. In his article available here, entitled "On the use of a pilot sample for sample size determination", Browne discusses the role of estimating the standard deviation. The abstract states: To compute the sample size needed to achieve the planned power for a t‐test, one needs an estimate of the population standard deviation δ. If one uses the sample standard deviation from a small pilot study as an estimate of δ, it is quite likely that the actual power for the planned study will be less than the planned power. Monte Carlo simulations indicate that using a 100(1 − γ) per cent upper one‐sided confidence limit on δ will provide a sample size sufficient to achieve the planned power in at least 100(1 − γ) per cent of such trials.
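A rough sketch of Browne's recommendation. To keep the snippet standard-library only, the chi-square quantile is computed with the Wilson–Hilferty approximation; the function names and the 80% confidence level are illustrative choices, not from the paper.

```python
from math import sqrt
from statistics import NormalDist

# Wilson-Hilferty approximation to the chi-square quantile with k df.
def chi2_quantile(p, k):
    z = NormalDist().inv_cdf(p)
    return k * (1 - 2 / (9 * k) + z * sqrt(2 / (9 * k))) ** 3

# Browne-style rule: inflate the pilot SD to a 100(1-gamma)% upper
# one-sided confidence limit before plugging it into the power calc:
#   s_upper = s * sqrt((n - 1) / chi2_quantile(gamma, n - 1)).
def upper_sd_limit(s, n, gamma=0.20):  # 80% upper confidence limit
    return s * sqrt((n - 1) / chi2_quantile(gamma, n - 1))

print(upper_sd_limit(1.0, 10))  # > 1: a pilot SD of 1 gets inflated
```

The inflation is substantial for small pilots and shrinks as the pilot grows, which is exactly why plugging a small-pilot SD in directly tends to leave the main study underpowered.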
45,006
What is effect of increasing number of hidden layers in a Feed Forward NN? [duplicate]
I recommend taking a look at http://www.deeplearningbook.org/ , which explains really well the concept of "capacity" (Chapter 5, page 110) and might give you some answers to your questions. I'll try my best though, for the sake of an answer. 1) Increasing the number of hidden layers might or might not improve the accuracy; it really depends on the complexity of the problem you are trying to solve. 2) Increasing the number of hidden layers well beyond the sufficient number will cause accuracy on the test set to decrease, yes. It will cause your network to overfit the training set; that is, it will learn the training data, but it won't be able to generalize to new, unseen data. A picture taken from the aforementioned book gives a pretty good intuition for this concept. In the left panel they fit a linear function to the data; this function is not complex enough to correctly represent the data, and it suffers from a bias (underfitting) problem. In the middle panel, the model has the appropriate complexity to accurately represent the data and to generalize, since it has learned the trend this data follows (the data were synthetically created and have an inverted-parabola shape). In the right panel, the model fits the data but overfits it; it hasn't learned the trend and thus cannot generalize to new data. 3) The number of epochs... I am actually not sure you necessarily need more epochs the more hidden layers you have; I guess it depends on other factors such as regularization. If you are trying to solve a very simple problem and you train both a shallow and a really deep network for the same number of epochs, you would probably get better test accuracy with the shallow one (due to the overfitting of the deeper network mentioned above).
However, if the problem is complex, then you might need to train the shallow network for more epochs and apply regularization to achieve the same accuracy as with the deep network. I am not an expert in this field, so don't take this last answer very seriously. I ran an experiment to see the validation cost for two models (3 convolutional layers + 1 fully connected + 1 softmax output layer); the blue curve corresponds to the model with 64 hidden units in the FC layer and the green one to the model with 128 hidden units in that same layer. As you can see, for the same number of epochs (x-axis), overfitting starts to occur earlier for the model with 128 hidden units (having more capacity). This overfitting point can be seen as where the validation cost stops decreasing and starts to increase. Check out that book, it is awesome.
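The underfitting/overfitting picture described above can be sketched numerically with polynomial capacity standing in for network depth; the inverted-parabola data, seed, and degrees are made-up stand-ins for the book's figure.

```python
import numpy as np

# Fit polynomials of increasing capacity to noisy data from an inverted
# parabola and compare training vs validation error.
rng = np.random.default_rng(0)
x_tr, x_va = rng.uniform(-1, 1, 20), rng.uniform(-1, 1, 200)
f = lambda x: -x**2
y_tr = f(x_tr) + rng.normal(scale=0.05, size=x_tr.size)
y_va = f(x_va) + rng.normal(scale=0.05, size=x_va.size)

def mse(deg):
    coef = np.polyfit(x_tr, y_tr, deg)
    err = lambda x, y: np.mean((np.polyval(coef, x) - y) ** 2)
    return err(x_tr, y_tr), err(x_va, y_va)

for deg in (1, 2, 9):
    print(deg, mse(deg))
# degree 1 underfits (high error everywhere); degree 2 matches the
# generating curve; a high degree can start chasing the noise instead.
```

Training error can only go down as capacity grows, while validation error improves only until the model's capacity matches the data, which is the point made in the answer.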
45,007
Why is the correlation coefficient a limited measure of dependence?
This is explained in the Wikipedia entry for Correlation and Dependence. Correlation basically measures how close two variables are to having a linear relationship between them. Consider now $X \sim U(-1, 1)$, and $Y = X^2$. Then if you know $X$, you know $Y$ exactly, and if you know $Y$, you know $X$ up to its sign. Hence they are not independent. An easy calculation shows that their correlation is 0, however: $\operatorname{Cov}(X, Y) = \mathbb{E}[X^3] - \mathbb{E}[X]\,\mathbb{E}[X^2] = 0$.
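A quick numerical check of this example (the sample size and seed are arbitrary):

```python
import numpy as np

# X ~ U(-1, 1) and Y = X^2 are perfectly dependent, yet their sample
# Pearson correlation is near zero.
rng = np.random.default_rng(42)
x = rng.uniform(-1, 1, 100_000)
y = x**2
r = np.corrcoef(x, y)[0, 1]
print(r)  # close to 0 even though Y is a deterministic function of X
```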
45,008
Why is the correlation coefficient a limited measure of dependence?
A simple example. The correlation between a random variable $x$ and its square $x^2$ is zero for any symmetric distribution on $\mathbb{R}$ (with the relevant moments finite). Here are the mean and second moment of the variable: $$\mu=\int x\, dF(x)=0$$ $$\sigma^2=\int x^2\, dF(x)$$ Let's calculate the Pearson correlation, whose numerator is the covariance $\operatorname{Cov}(x, x^2)=\int x\, x^2\, dF(x)-\mu\sigma^2$: $$\rho=\frac{\int x^3\, dF(x)-\mu\sigma^2}{\sigma_x\,\sigma_{x^2}}=\frac{0-0}{\sigma_x\,\sigma_{x^2}}=0,$$ where $\sigma_x$ and $\sigma_{x^2}$ are the standard deviations of $x$ and $x^2$, and $\int x^3\, dF(x)=0$ by symmetry. However, if I know $x$, it tells me everything about $x^2$. That's one example where correlation does not reveal how strong the relationship between two variables is.
45,009
Why does the resulting matrix from Cholesky decomposition of a covariance matrix when multiplied by its transpose not give back the covariance matrix?
As explained in my comment, the inconvenient truth is that the Cholesky decomposition, while usually defined as $K=LL^T$ where $L$ is lower triangular, is equally valid as $K=U^TU$ where $U$ is upper triangular. The implementation of the Cholesky decomposition in LAPACK (the library our computers use for linear-algebra tasks) allows both forms. R has hard-coded the upper one (there is a 'U' in the call to the routine dpstrf that actually computes the Cholesky factorization). This means one has to transpose the result of chol in order to get a lower triangular matrix. Once that is done, as you have already discovered yourself, the result follows directly. So for example, given the matrix S of the original post:

U <- chol(S)
L <- t(chol(S))
S - crossprod(U)   # equivalent to S - U^T U, should be approx. 0
S - tcrossprod(L)  # equivalent to S - L L^T, should be approx. 0

I hope it is therefore clear that the Wikipedia page is not wrong. Being somewhat critical, Wikipedia's Cholesky decomposition article should probably mention that the decomposition $K=LL^T$ is equivalent to $K=U^TU$ where $U=L^T$; it was probably omitted for consistency of notation.
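For readers outside R, the same lower/upper convention can be checked in NumPy; the 2x2 matrix here is an arbitrary positive-definite example, not the S from the original post.

```python
import numpy as np

# numpy.linalg.cholesky returns the lower-triangular factor L with
# S = L @ L.T; R's chol returns the upper factor U = L.T instead.
S = np.array([[4.0, 2.0],
              [2.0, 3.0]])
L = np.linalg.cholesky(S)
U = L.T

assert np.allclose(S, L @ L.T)     # lower convention: K = L L^T
assert np.allclose(S, U.T @ U)     # upper convention: K = U^T U
assert np.allclose(L, np.tril(L))  # L really is lower triangular
print(L)
```

Either factor reconstructs S; the only difference between the two conventions is a transpose.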
45,010
Why does the resulting matrix from Cholesky decomposition of a covariance matrix when multiplied by its transpose not give back the covariance matrix?
So, why do you think that chol(S) returns your A and not A'? In fact, it does return A' if you look at the values or read the documentation: it returns the upper triangular factor, which corresponds to A' in your Wiki reference.
45,011
Batch Learning w/Random Forest Sklearn [closed]
Yes, batch learning is certainly possible in scikit-learn. When you first initialize your RandomForestClassifier object, set the warm_start parameter to True. This means that successive calls to model.fit will not fit entirely new models, but will add trees to the existing ensemble. Here's some pseudo-code to get you started; it builds one tree for each sub-chunk of your data:

# split your data into an iterable of (X, y) pairs,
# sized so that each one fits into memory
data_splits = ...

clf = RandomForestClassifier(warm_start=True, n_estimators=1)
for _ in range(10):  # 10 passes through the data
    for X, y in data_splits:
        clf.fit(X, y)
        clf.n_estimators += 1  # increment so the next fit adds 1 tree

I'm surprised to see that there's no subsample parameter in RandomForestClassifier similar to the one in GradientBoostingClassifier that controls the number of observations visible to each tree. If you switched to GradientBoostingClassifier, you might be able to simply set subsample to a very small number to achieve the same result.
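For completeness, here is a self-contained, runnable version of the warm_start pattern described above, on synthetic data (the data, chunk size, and seed are made up for illustration):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Synthetic data: label depends on the first two features.
rng = np.random.RandomState(0)
X = rng.randn(300, 4)
y = (X[:, 0] + X[:, 1] > 0).astype(int)

# Pretend the data only fits in memory 100 rows at a time.
data_splits = [(X[i:i + 100], y[i:i + 100]) for i in range(0, 300, 100)]

clf = RandomForestClassifier(warm_start=True, n_estimators=1, random_state=0)
for X_chunk, y_chunk in data_splits:
    clf.fit(X_chunk, y_chunk)  # keeps earlier trees; fits the new one on this chunk
    clf.n_estimators += 1      # so the next fit() call adds exactly one more tree

print(len(clf.estimators_))   # one tree per chunk: 3
```

Each tree sees only a single chunk, so the memory footprint per fit() call stays bounded by the chunk size.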
45,012
Is $Pr(x \leq C)$ equal to $Pr(\sqrt{x} \leq \sqrt{C})$?
Yes, because $x \leq C \Leftrightarrow \sqrt{x} \leq \sqrt{C}$ for $x, C \geq 0$. This means that the set of outcomes $\{A \in \Omega: x(A) \leq C\}$ equals the set $\{A \in \Omega: \sqrt{x(A)} \leq \sqrt{C}\}$, and so their probabilities are equal.
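A quick numeric sanity check (a sketch in Python; the distribution and the constant $C$ are arbitrary choices): since $x \leq C$ and $\sqrt{x} \leq \sqrt{C}$ pick out exactly the same sample points, the two empirical probabilities agree exactly, not just approximately.

```python
import random

random.seed(0)
xs = [random.uniform(0, 4) for _ in range(100_000)]  # nonnegative draws
C = 2.0

p1 = sum(x <= C for x in xs) / len(xs)
p2 = sum(x ** 0.5 <= C ** 0.5 for x in xs) / len(xs)

# The two events contain exactly the same sample points,
# so the empirical probabilities coincide exactly.
assert p1 == p2
print(p1)
```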
45,013
How does perfect separation in logistic regression affect the AUC?
Why not try a simple simulation to try to figure it out? Here is one, coded in R:

library(ROCR)   # we'll use this package for the ROC & AUC
set.seed(8365)  # this makes the example exactly reproducible
x = c(runif(50, min=0, max=4),   # the x data have a gap from 4 to 6
      runif(50, min=6, max=10))
y = ifelse(x<5, 0, 1)            # lower values are all 0's; higher values 1's
m = glm(y~x, family=binomial)
summary(m)
# ...
# Deviance Residuals:
#        Min          1Q      Median          3Q         Max
# -2.961e-05  -2.110e-08   0.000e+00   2.110e-08   2.674e-05   # residuals all ~0
#
# Coefficients:
#             Estimate Std. Error z value Pr(>|z|)
# (Intercept)   -98.42   75721.51  -0.001    0.999   # huge coefficients & SEs
# x              19.42   14525.40   0.001    0.999
# ...
#     Null deviance: 1.3863e+02  on 99  degrees of freedom
# Residual deviance: 2.6504e-09  on 98  degrees of freedom   # residual deviance ~0
# AIC: 4
#
# Number of Fisher Scoring iterations: 25   # very many iterations

pred = prediction(predict(m, type="response"), y)  # these create the ROC
perf = performance(pred, "tpr", "fpr")
performance(pred, "auc")@y.values[[1]]             # this is the AUC
# [1] 1

windows(width=7, height=4)
layout(matrix(1:2, nrow=1))
plot(x, y, main="Data w/ fitted model")
xs = seq(0, 10, by=.1)
lines(xs, predict(m, data.frame(x=xs), "response"), col="red")
plot(perf, main="ROC curve")

What you see in the output and in the figure is that the AUC is $1$. The AUC is the area under the ROC curve. The ROC curve is computed by varying the threshold above which you predict class $1$. At each point, you then examine the predicted classes vis-à-vis the actual classes and determine the true positive rate and the false positive rate. The ROC curve is just the plot of those two rates. Note, however, that no matter what threshold you use, you will have perfect accuracy. Therefore, the ROC 'curve' is necessarily just the left and top sides of the unit square, and the area under it is $100\%$.

@Clif AB makes a good point that you can have separation without having an $AUC = 1$. Here is an illustration of that case. You can see that you nonetheless get a higher AUC when there is separation than in an almost identical case without separation. The reason is essentially the same: no matter where you set your threshold, you will have a better true positive rate and false positive rate with the separable data than without.

x = rep(0:1, each=10)
y1 = c(0,0,0,0,0,1,1,1,1,1,
       0,1,1,1,1,1,1,1,1,1)  # one 0 when x=1; no separation
y2 = c(0,0,0,0,0,1,1,1,1,1,
       1,1,1,1,1,1,1,1,1,1)  # no 0's when x=1; separation
m1 = glm(y1~x, family=binomial)
m2 = glm(y2~x, family=binomial)
summary(m1)  # output omitted
summary(m2)  # output omitted

pred1 = prediction(predict(m1, type="response"), y1)
perf1 = performance(pred1, "tpr", "fpr")
performance(pred1, "auc")@y.values[[1]]  # this is the AUC
# [1] 0.7380952

pred2 = prediction(predict(m2, type="response"), y2)
perf2 = performance(pred2, "tpr", "fpr")
performance(pred2, "auc")@y.values[[1]]  # this is the AUC
# [1] 0.8333333

windows()
plot(perf1, col="blue", main="ROCs")
plot(perf2, col="red", add=T)
legend("bottomright", legend=c("Separation", "No separation"),
       lty=1, col=c("red","blue"))
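The same conclusion can be reached without ROCR, using the rank-based characterization of the AUC: the probability that a randomly chosen positive gets a higher score than a randomly chosen negative, with ties counted as one half. A small pure-Python sketch with made-up scores:

```python
def auc(scores, labels):
    """Rank-based AUC: P(score_pos > score_neg), ties counted as 0.5."""
    pos = [s for s, l in zip(scores, labels) if l == 1]
    neg = [s for s, l in zip(scores, labels) if l == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

labels = [0] * 5 + [1] * 5

# Perfect separation: every class-1 score exceeds every class-0 score.
separated = [0.01, 0.02, 0.03, 0.02, 0.01, 0.99, 0.98, 0.97, 0.99, 0.98]
print(auc(separated, labels))  # 1.0

# Swap two scores so the classes overlap: the AUC drops below 1.
overlap = [0.01, 0.02, 0.03, 0.02, 0.99, 0.01, 0.98, 0.97, 0.99, 0.98]
print(auc(overlap, labels))
```

With separation, no positive-negative pair is ever ranked the wrong way round, so the AUC is exactly 1 regardless of how the scores were produced.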
45,014
How does perfect separation in logistic regression affect the AUC?
@gung had a great answer. I just want to add more explanation of why "no matter what threshold you use, you will have perfect accuracy". If we add one more line to @gung's code to check the predicted probabilities, we can see that for all the data points the predicted probability is essentially either 0 or 1; this is why the threshold does not matter and we get an AUC of 1.

> predict(m, data.frame(x=x), type="response")
           1            2            3            4            5            6            7            8            9           10           11           12           13           14
2.220446e-16 2.220446e-16 2.220446e-16 2.220446e-16 2.220446e-16 2.220446e-16 2.220446e-16 4.384945e-10 2.220446e-16 5.245702e-12 2.220446e-16 2.220446e-16 2.220446e-16 2.220446e-16
          15           16           17           18           19           20           21           22           23           24           25           26           27           28
2.220446e-16 2.220446e-16 2.220446e-16 4.719935e-11 2.220446e-16 2.220446e-16 2.220446e-16 2.220446e-16 2.220446e-16 2.669394e-13 2.220446e-16 1.365883e-10 2.220446e-16 6.992038e-13
          29           30           31           32           33           34           35           36           37           38           39           40           41           42
2.220446e-16 5.435395e-12 2.220446e-16 2.220446e-16 2.220446e-16 2.220446e-16 5.922012e-12 2.220446e-16 2.220446e-16 2.220446e-16 2.220446e-16 2.220446e-16 2.220446e-16 2.220446e-16
          43           44           45           46           47           48           49           50           51           52           53           54           55           56
2.220446e-16 2.220446e-16 2.912038e-12 2.220446e-16 2.220446e-16 1.258165e-11 2.220446e-16 2.220446e-16 1.000000e+00 1.000000e+00 1.000000e+00 1.000000e+00 1.000000e+00 1.000000e+00
          57           58           59           60           61           62           63           64           65           66           67           68           69           70
1.000000e+00 1.000000e+00 1.000000e+00 1.000000e+00 1.000000e+00 1.000000e+00 1.000000e+00 1.000000e+00 1.000000e+00 1.000000e+00 1.000000e+00 1.000000e+00 1.000000e+00 1.000000e+00
          71           72           73           74           75           76           77           78           79           80           81           82           83           84
1.000000e+00 1.000000e+00 1.000000e+00 1.000000e+00 1.000000e+00 1.000000e+00 1.000000e+00 1.000000e+00 1.000000e+00 1.000000e+00 1.000000e+00 1.000000e+00 1.000000e+00 1.000000e+00
          85           86           87           88           89           90           91           92           93           94           95           96           97           98
1.000000e+00 1.000000e+00 1.000000e+00 1.000000e+00 1.000000e+00 1.000000e+00 1.000000e+00 1.000000e+00 1.000000e+00 1.000000e+00 1.000000e+00 1.000000e+00 1.000000e+00 1.000000e+00
          99          100
1.000000e+00 1.000000e+00

In the output there are 100 probability predictions on 100 data points. The predicted probabilities for the first 50 data points are numerically 0, and for the second half they are 1. Among all these 100 numbers there are two unique values, 0 and 1. If we select any threshold between 0 and 1, we will always get a perfect cut: exactly the same labels as the ground truth.
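To see why the fitted probabilities collapse to machine 0 and 1, plug the enormous coefficients from the summary above into the logistic function (a sketch; the x values are arbitrary points on either side of the gap):

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# The (diverging) estimates from the separated fit above.
b0, b1 = -98.42, 19.42

for x in (1.0, 4.0, 6.0, 10.0):
    print(x, sigmoid(b0 + b1 * x))
# x below the gap gives probabilities numerically ~0,
# x above the gap gives probabilities numerically ~1.
```

Because the linear predictor is tens of units away from zero on both sides of the gap, the logistic function saturates, and every threshold strictly between 0 and 1 yields the same perfect classification.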
45,015
How does the second derivative inform an update step in Gradient Descent?
I agree with your distaste for the writing. It seems as though you have an understanding of what is going on, but I will attempt to clarify why the second derivative is important. Consider a two-dimensional orthogonal system. Since the dimensions are orthogonal we can look at them independently, and together. This need not be the case, but I use the orthogonal system to avoid the linear algebra which may muddy the intuition. In the $x_1$ dimension, the objective $f$ varies roughly as $f=x_1^2$. In the $x_2$ dimension, the objective varies as $f = .00001x_2^2$. The minimum is $f(0,0) = 0$. This is the gradient descent update in each dimension: $$x_{1,k+1} = x_{1,k} - 2\alpha x_{1,k}$$ $$x_{2,k+1} = x_{2,k} - .00002\alpha x_{2,k}$$ where $\alpha$ is the learning rate. That is, according to a gradient descent update, if you start at about $(1,1)$ then after a few iterations you will be at $\approx (0,1)$, because the gradient in the $x_2$ direction is already very near zero. True, we may have predicted this based on the fact that the gradient at every point in the $x_2$ direction is near zero, but it is still undesirable; I think that is the point they were trying to make in the bolded sentence. Now we note that $\frac{\partial^2 f}{\partial x_1^2} = 2$ and $\frac{\partial^2 f}{\partial x_2^2} = .00002$. Dividing by these amounts to accounting for the curvature (or lack thereof) in each dimension. Now let's solve these two 1-d problems using second-order information. Recall that the form is $x_{k+1} = x_k - \alpha \left(\frac{\partial^2 f}{\partial x_k^2}\right)^{-1} \frac{\partial f}{\partial x_k}$: $$x_{1,k+1} = x_{1,k} - \alpha x_{1,k}$$ $$x_{2,k+1} = x_{2,k} - \alpha x_{2,k}$$ That is, they are converging at the same rate, exactly as we would hope they would!
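The two update rules above can be checked numerically (a small sketch; the learning rate and iteration count are arbitrary choices):

```python
alpha = 0.1

# Plain gradient descent on f = x1^2 + 0.00001 * x2^2, starting at (1, 1).
x1, x2 = 1.0, 1.0
for _ in range(100):
    x1 -= alpha * 2 * x1       # df/dx1 = 2 * x1
    x2 -= alpha * 2e-5 * x2    # df/dx2 = 0.00002 * x2
print(x1, x2)  # x1 is essentially 0, but x2 has barely moved from 1

# Scaling each step by the inverse second derivative equalizes the rates.
y1, y2 = 1.0, 1.0
for _ in range(100):
    y1 -= alpha * y1
    y2 -= alpha * y2
print(y1, y2)  # both coordinates shrink at exactly the same rate
```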
45,016
How does the second derivative inform an update step in Gradient Descent?
Deviation from an approximation by a linear function. An estimate of the improvement after a gradient step approximates the value $f(x_{n+1})$ at the point $x_{n+1}$ by assuming that the slope of the function is constant; it approximates the function with a linear function. Your expectation might be that for a small change of the coordinates $s = x_{n+1} - x_n$ the change of the function is approximately linear: $$f(x_{n+1}) \approx f(x_n) + \nabla f(x_n) \cdot (x_{n+1}-x_{n}).$$ Whether the improvement will be as large as the one based on such a linear approximation depends on the second derivative, i.e. on how far the slope is from being constant. If the second directional derivative (along the direction of change $s$) is positive, then you will effectively get a smaller change in the function. E.g. for the one-dimensional quadratic function $f(x)=ax^2 + bx + c$ you have $$f(x_{n+1}) = f(x_{n}) + f^\prime(x_n) (x_{n+1}-x_{n}) + \frac{f^{\prime\prime}(x_n)}{2}(x_{n+1}-x_{n})^2.$$ More generally, instead of just the second-order term you have a Taylor expansion $$f(x_{n+1}) = f(x_{n}) + \sum_{k=1}^\infty \frac{f^{(k)}(x_n)}{k!} (x_{n+1}-x_{n})^k,$$ although in many practical cases, for small changes $(x_{n+1}-x_{n}) \ll 1$, the higher-order terms are sufficiently small or even vanish (e.g. for polynomial functions). Expression of the error. We can quantify the limits of the error that can be made by using a constant slope, $$f(x_{n+1}) \approx f(x_{n}) + f'(x_n) (x_{n+1}-x_{n}),$$ instead of the varying slope, $$f(x_{n+1}) = f(x_{n}) + \int_{x_{n}}^{x_{n+1}} f'(s)\, ds.$$ The error made by taking a constant slope cannot be larger than the difference between the steepest slopes (up/down): $$ \int_{x_n}^{x_{n+1}} f^\prime_{min}\, ds \leq \int_{x_n}^{x_{n+1}} f^\prime(s)\, ds \leq \int_{x_n}^{x_{n+1}} f^\prime_{max}\, ds $$ or $$ f^\prime_{min} (x_{n+1}-x_{n}) \leq \int_{x_n}^{x_{n+1}} f^\prime(s)\, ds \leq f^\prime_{max}(x_{n+1}-x_{n}).$$ When the second derivative $\vert f^{\prime \prime} \vert$ is small between $x_{n}$ and $x_{n+1}$, the minimum and maximum values of $f^\prime$ won't differ much. You can also express the limits of the error of the approximation in terms of the minimum and maximum values of the second derivative. Consider the error $\epsilon = f(x_{n+1}) - [\, f(x_n) + f'(x_n) (x_{n+1}-x_{n})\, ]$ in terms of the highest and lowest possible second derivative $f''$. The linear approximation assumed $f''=0$, and the error it makes can be bounded by the worst cases, in which $f''$ sits constantly at its maximum or minimum: $$\tfrac{1}{2} f''_{min} (x_{n+1}-x_n)^2 \leq \epsilon \leq \tfrac{1}{2} f''_{max} (x_{n+1}-x_n)^2.$$ Often the value of $f''$ is positive near the optimum (both its upper and lower bounds, because the cost function is convex near the optimum and the slope flattens), which means that the error is always positive and we often make smaller steps than expected. That is probably why the text uses the phrase 'as much', because commonly the improvement after a gradient step will be less (though this is not always true; it is just the common situation): whether a gradient step will cause as much of an improvement as we would expect based on the gradient alone.
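For the quadratic case the bound above is exact, since $f''$ is constant; here is a quick numeric check (a sketch; the starting point and step size are arbitrary):

```python
# f(x) = x^2, so f'' = 2 everywhere and the linear-approximation error
# should equal 0.5 * f'' * (x_{n+1} - x_n)^2 exactly (up to rounding).
f = lambda x: x * x
fprime = lambda x: 2 * x

xn, xn1 = 1.0, 1.2
step = xn1 - xn
linear = f(xn) + fprime(xn) * step   # constant-slope (first-order) prediction
eps = f(xn1) - linear                # actual value minus linear prediction
print(eps, 0.5 * 2 * step ** 2)      # the two quantities agree
```

Because $f'' > 0$ here, the actual function value lies above the linear prediction, so a gradient step improves the objective by less than the first-order estimate suggests, exactly the point of the quoted sentence.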
45,017
How does the second derivative inform an update step in Gradient Descent?
Second derivatives are used to understand the rate of change of the derivatives themselves, i.e. the curvature of the loss surface. Considering the huge number of hyper-parameters involved in building a trained model, it is valuable to detect the accuracy trend of the model early. Many of us spend considerable time training models with different sets of hyper-parameters to reach an optimal solution, so early detection of the learning trend helps save resources. Second derivatives are one of the easiest ways to detect learning trends early, and they give the model designer feedback to intervene and refine the hyper-parameters at a much earlier stage, thus saving precious time and other resources.
45,018
Pearson correlation between a variable and its square
You are curious about whether your value of $r$ is "too high": it seems you think that, since $X$ and $X^2$ do not have an exactly linear relationship, Pearson's $r$ should be rather low. The high $r$ is not telling you that the relationship is linear, but it is telling you that the relationship is rather close to being linear. If you are specifically interested in the case where $X$ is uniform, you might want to look at this thread on Math SE on the covariance between a uniform distribution and its square. You are using the discrete uniform distribution $1,2,\dots,n$, but if you rescaled $X$ by a factor of $1/n$, and hence rescaled $X^2$ by a factor of $1/n^2$, the correlation would be unchanged (since correlation is not affected by rescaling by a positive scale factor). You would now have a discrete uniform distribution with equal probability masses on $\frac{1}{n}, \frac{2}{n}, \dots, \frac{n-1}{n}, 1$. For large values of $n$, this approximates a continuous uniform distribution (also called a "rectangular distribution") on $[0,1]$. By an argument analogous to that on the Math SE thread, we have: $$\operatorname{Cov}(X,X^2) = \mathbb{E}(X^3)-\mathbb{E}(X)\mathbb{E}(X^2) = \int_0^1 x^3\, dx - \int_0^1 x\, dx \cdot \int_0^1 x^2\, dx$$ This integrates to $\frac{1}{4} - \frac{1}{2} \cdot \frac{1}{3} = \frac{1}{12}$. We also have $\operatorname{Var}(X) = \mathbb{E}(X^2)-\mathbb{E}(X)^2 = \frac{1}{3} - \left(\frac{1}{2}\right)^2 = \frac{1}{12}$. Similarly we find $\operatorname{Var}(X^2) = \mathbb{E}(X^4)-\mathbb{E}(X^2)^2 = \frac{1}{5} - \left(\frac{1}{3}\right)^2 = \frac{4}{45}$. Hence, if $X \sim U(0,1)$, then: $$\operatorname{Corr}(X,X^2) = \frac{\operatorname{Cov}(X,X^2)}{\sqrt{\operatorname{Var}(X) \cdot \operatorname{Var}(X^2)}} = \frac{\frac{1}{12}}{\sqrt{{\frac{1}{12}}\cdot{\frac{4}{45}}}} = \frac{\sqrt{15}}{4}$$ To seven decimal places, this is $r = 0.9682458$, even though the relationship is quadratic rather than linear.
Now you have taken a discrete uniform distribution on $1, 2, \dots, n$ rather than a continuous one, but for the reasons explained above, increasing $n$ will produce a correlation closer to the continuous case, so that $\sqrt{15}/4$ will be the limiting value. Let us confirm this in R:

corn <- function(n) {
  x = 1:n
  cor(x, x^2)
}

> corn(2)
[1] 1
> corn(3)
[1] 0.9897433
> corn(4)
[1] 0.984374
> corn(5)
[1] 0.9811049
> corn(10)
[1] 0.9745586
> corn(100)
[1] 0.9688545
> corn(1e3)
[1] 0.9683064
> corn(1e6)
[1] 0.9682459
> corn(1e7)
[1] 0.9682458

That correlation of $r=0.9682458$ may sound surprisingly high, but if we inspected a graph of the relationship between $X$ and $X^2$ it would indeed appear approximately linear, and this is all that the correlation coefficient is telling you. Moreover, we can see from the table of output from the corn function that increasing the value of $n$ makes the linear correlation smaller (note that with two points, we had a perfect linear fit and a correlation equal to one!) but that although $r$ is falling, it is bounded below by $\sqrt{15}/4$. In other words, increasing the length of your sequence of integers makes the linear fit somewhat worse, but even as $n$ tends to infinity your $r$ never becomes worse than $0.9682\dots$.

x = 1:100; y = x^2
plot(x, y)
abline(lm(y~x))

Perhaps visually you are still not convinced that the correlation looks as strong as the calculated coefficient suggests: clearly the points are below the line of best fit for low and high values of $X$, and above it for intermediate $X$. If it can't capture this quadratic curvature, is the line really such a good fit to the points? You may find it helpful to compare the overall variation of the $Y$ coordinates about their own mean (the "total variation") to how much the points vary above and below the regression line (the "residual variation" that the regression line was unable to explain).
If it can't capture this quadratic curvature, is the line really such a good fit to the points? You may find it helpful to compare the overall variation of the $Y$ coordinates about their own mean (the "total variation") to how much the points vary above and below the regression line (the "residual variation" that the regression line was unable to explain). The fraction of the residual variation over the total variation tells you what proportion of the variation was not explained by the regression line; the proportion of variation that is explained by the regression line is then one minus this fraction, and is called the $R^2$. In this case, we can see that the variation of points above and below the line is relatively small compared to the variation in their $Y$ coordinates, and so the proportion unexplained by the regression is small and the $R^2$ is large.

It turns out that for a simple linear regression, $R^2$ is equal to the square of the Pearson correlation. In fact $r=\sqrt{R^2}$ if the regression slope is positive (an increasing relationship) or $r=-\sqrt{R^2}$ if the slope is negative (decreasing). We had a large $R^2$ so our correlation is large also. This is the sense we mean when we state that "a Pearson correlation near $\pm 1$ indicates the linear fit is good" — not that our straight regression line captures the true nature of the relationship between $X$ and $Y$, and so there is no curvature and no discernible pattern in the residual variation, but instead that the line provides a good approximation to the true relationship, and that the proportion of residual variation (i.e. that part left unexplained by the linear model) is small.

Note that had you chosen a discrete uniform on e.g. $-100, -99, \dots, 99, 100$ and rescaled it to $[-1,1]$, you would have found a covariance and correlation of zero, as happens in the linked Math SE thread. There is neither an increasing nor decreasing relationship.

x=-100:100; y=x^2
plot(x,y)
abline(lm(y~x))

As an exercise to think through, what would be the correlation between $-1, -2, -3, \dots, -n$ and its squares? You can easily write some R code to confirm your guess.
If all you care about is the existence of an increasing or decreasing relationship, rather than the extent to which it is linear, you can use a rank-based measure such as Kendall's tau or Spearman's rho, as mentioned in Glen_b's answer. For my first graph, which had a perfectly monotonic increasing relationship, both methods would have given the highest possible correlation (one). For the second graph, which is neither increasing nor decreasing, both would give a correlation of zero.
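For readers who prefer Python, here is my own port of the corn function from the R snippet above (only the name is taken from the answer; the port itself is an addition). It shows the same monotone decrease of $r$ toward $\sqrt{15}/4$:

```python
from math import sqrt

def corn(n):
    """Pearson correlation between 1..n and its squares."""
    x = range(1, n + 1)
    y = [v * v for v in x]
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

for n in (2, 5, 100, 10**5):
    print(n, corn(n))  # decreases toward sqrt(15)/4 = 0.96824583...
```

With $n=2$ the fit is perfectly linear ($r=1$), and as $n$ grows the value falls toward, but never below, the limiting $\sqrt{15}/4$.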
45,019
Pearson correlation between a variable and its square
The Pearson correlation measures the closeness to a linear relationship. If $X$ is positive, then the correlation between $X$ and $X^2$ is often fairly close to 1. If you want to measure the strength of a monotonic relationship, there are a number of other choices, of which the two best known are the Kendall correlation (Kendall's tau) and the Spearman correlation (Spearman's rho):

x=1:100
cor(x,x^2,method="pearson")
[1] 0.9688545
cor(x,x^2,method="kendall")
[1] 1
cor(x,x^2,method="spearman")
[1] 1

I'd add that looking at the correlation of non-random values isn't necessarily where I'd start - it can be useful when exploring edge cases, however. For the Pearson correlation you may find it useful to consider playing about with the rho and n values here:

n=100
rho=0.6
x=rnorm(100)
z=rnorm(100)
y=rho*x + sqrt(1-rho^2)*z
plot(x,y)
cor(x,y)

(In particular, you might try varying rho from close to -1 up to close to 1)

You may also find these discussions of correlation useful for getting a handle on what correlations do and don't do:

Why zero correlation does not necessarily imply independence
Does the correlation coefficient, r, for linear association always exist?
If A and B are correlated with C, why are A and B not necessarily correlated?
How would you explain covariance to someone who understands only the mean?
Pearson's or Spearman's correlation with non-normal data
How to choose between Pearson and Spearman correlation?
Kendall Tau or Spearman's rho?
If linear regression is related to Pearson's correlation, are there any regression techniques related to Kendall's and Spearman's correlations?
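The same comparison can be reproduced in Python without extra packages: Pearson from its definition, and Spearman as the Pearson correlation of the ranks (Kendall's tau is omitted for brevity). This sketch is my own illustration, not part of the answer above:

```python
from math import sqrt

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    return cov / sqrt(sum((a - mx) ** 2 for a in x)
                      * sum((b - my) ** 2 for b in y))

def ranks(v):
    # Rank positions 1..n; ties are ignored, which is fine for strictly
    # increasing data such as 1..n and its squares.
    order = sorted(range(len(v)), key=lambda i: v[i])
    r = [0] * len(v)
    for rank, i in enumerate(order, start=1):
        r[i] = rank
    return r

def spearman(x, y):
    return pearson(ranks(x), ranks(y))

x = list(range(1, 101))
y = [v * v for v in x]
print(pearson(x, y))   # ~0.9688545, matching cor(x, x^2) in R
print(spearman(x, y))  # 1.0: the relationship is perfectly monotonic
```

Pearson sees only approximate linearity, while the rank-based measure reports a perfect monotonic association.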
45,020
What does P(A|B)*P(A|C) simplify to?
Let's say we have a problem of predicting whether a storm is coming or not (event $A$), and we have some clues available to us, namely the amount of clouds in the sky (event $B$) and how scared your dogs are (event $C$). We can visualise the problem at hand using a Venn diagram:

We are interested in calculating the probability of a storm given the clues, $P(A|B,C)$. That quantity isn't represented directly in the diagram; instead, we can get $P(A,B,C)$ (a.k.a. $P(A \cap B \cap C)$) from the white central area in the diagram. Fortunately, the relationship between $P(A|B,C)$ and $P(A,B,C)$ is simple:

$$P(A,B,C) = P(A|B,C) \cdot P(B,C)$$

where $P(B,C)$ corresponds to the magenta and white areas of the diagram combined.

We have a model P(storm is coming | how many clouds are outside), and have another model P(storm is coming | how scared the dogs are)

So we know $P(A|B)$ and $P(A|C)$. Like before, these two quantities are not represented directly in the diagram. Instead, we have $P(A,B)$, which corresponds to the yellow and white areas, and $P(A,C)$, which corresponds to the cyan and white areas. As before, we know the relationship between $P(A,B)$ and $P(A|B)$:

$$P(A,B) = P(A|B) \cdot P(B)$$

Same goes for $P(A,C)$ and $P(A|C)$. To recap, we would like to know $P(A|B,C)$, which is related to the white area in the Venn diagram. So what happens if we add $P(A)$, $P(B)$ and $P(C)$? We are counting the magenta, yellow and cyan areas twice each, and the white central area three times. So we subtract the magenta, yellow and cyan areas once:

$$P(A) + P(B) + P(C) - P(A,B) - P(A,C) - P(B,C)$$

Except now we removed the white area from the summation; we added the white area three times when we summed up $A$, $B$, and $C$, but we removed it three times when we subtracted $(A,B)$, $(A,C)$ and $(B,C)$.
So we add it back:

$$P(A) + P(B) + P(C) - P(A,B) - P(A,C) - P(B,C) + P(A,B,C)$$

We didn't account for the area outside all the circles, which corresponds to $P(\lnot A, \lnot B, \lnot C)$, the chance that there is no storm AND there are no clouds AND the dogs aren't scared.

$$P(A) + P(B) + P(C) - P(A,B) - P(A,C) - P(B,C) + P(A,B,C) + P(\lnot A, \lnot B, \lnot C) = 1$$

Let's assume that having no storm, no clouds and calm dogs all at the same time is very unlikely; $P(\lnot A, \lnot B, \lnot C) \approx 0$. In that case,

$$P(A) + P(B) + P(C) - P(A,B) - P(A,C) - P(B,C) + P(A,B,C) = 1$$

Let's apply the transformations we saw before:

$\begin{align} P(A|B,C) \cdot P(B,C) &= 1 - [P(A) + P(B) + P(C) - P(A|B) \cdot P(B) - P(A|C) \cdot P(C) - P(B,C)]\\ P(A|B,C) &= \dfrac{1 - P(A) - P(B) - P(C) + P(A|B) \cdot P(B) + P(A|C) \cdot P(C)}{P(B,C)} + 1 \end{align}$

As you can see, you would need more information if you want to calculate the probability of a storm given your clues. Namely:

1. The probability of a storm in general;
2. The probability of a cloudy sky in general;
3. The probability of your dogs being scared in general; and
4. The probability that your dogs will be scared AND the sky will be cloudy.

If you think about it, numbers 1-3 make sense:

1. The clues may increase the probability of a storm, but if there aren't many storms to begin with, then the probability of a storm given your clues will still be small (albeit larger than the baseline probability of a storm);
2. If you live in a typically cloudy area, the amount of clouds in the sky will probably be a poor predictor of a storm (because it's always cloudy, storm or no storm);
3. Ditto for your dogs being scared.

Number 4 is a bit trickier. If either your dogs or the sky (or both) are perfect predictors of a storm, then there is no need for the other.

Now all this math assumes that your model outputs $P(\mathrm{storm} | \mathrm{clouds})$ ($P(A | B)$) and $P(\mathrm{storm} | \mathrm{scared\ dogs})$ ($P(A|C)$).
However, it is typically easier to observe $P(\mathrm{clouds} | \mathrm{storm})$ ($P(B | A)$) and $P(\mathrm{scared\ dogs} | \mathrm{storm})$ ($P(C|A)$). In that case, we must note that $$P(A,B) = P(A|B) \cdot P(B) = P(B|A) \cdot P(A)$$ so our previous model becomes $$P(A|B,C) = \dfrac{1 - P(A) - P(B) - P(C) + P(B|A) \cdot P(A) + P(C|A) \cdot P(A)}{P(B,C)} + 1$$
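As a sanity check, the expression above can be verified numerically. The sketch below is my own; all eight joint probabilities are made up, chosen so that the "no storm, no clouds, calm dogs" cell is zero as the derivation assumes. It confirms that the formula reproduces $P(A|B,C)$ computed directly from the joint distribution:

```python
# Hypothetical joint probabilities p[(a, b, c)]; the (0, 0, 0) cell is zero,
# matching the assumption that "no storm, no clouds, calm dogs" never happens.
p = {
    (0, 0, 0): 0.00, (0, 0, 1): 0.10, (0, 1, 0): 0.15, (0, 1, 1): 0.05,
    (1, 0, 0): 0.05, (1, 0, 1): 0.10, (1, 1, 0): 0.20, (1, 1, 1): 0.35,
}

def P(**fixed):
    """Marginal probability of the named events, e.g. P(a=1, b=1)."""
    idx = {"a": 0, "b": 1, "c": 2}
    return sum(v for k, v in p.items()
               if all(k[idx[name]] == val for name, val in fixed.items()))

pa, pb, pc = P(a=1), P(b=1), P(c=1)
pab, pac = P(a=1, b=1), P(a=1, c=1)
pbc = P(b=1, c=1)

direct = P(a=1, b=1, c=1) / pbc                     # P(A|B,C) from the joint
formula = (1 - pa - pb - pc + pab + pac) / pbc + 1  # the derived expression
print(direct, formula)
```

Both values agree, as they must whenever the assumption $P(\lnot A,\lnot B,\lnot C)=0$ actually holds.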
45,021
What does P(A|B)*P(A|C) simplify to?
In my opinion your problem is not about that expression but about modelling. You have different clues (clouds, scared dogs) which provide evidence for a forthcoming event (rain). In other words, if I understand you, your question is actually more about: how to combine different clues? This question is dealt with in the field of graphical models. Let me refer to the corresponding Wikipedia site, and the references therein. Especially the paper by David Heckerman (A Tutorial on Learning With Bayesian Networks), and the reference to Christopher Bishop's Machine Learning book. The example you give is very similar to those presented to describe the explaining-away effect, which is nicely described in this video.
45,022
What does P(A|B)*P(A|C) simplify to?
Not really a simplification (if that is possible at all), but if you wish a relation between $P(a \vert b)P(a \vert c)$ and $P(a \vert b,c)$, you could say that

$P(a \vert b)P(a \vert c) = P(a \vert b,c)^2 \frac{P(c \vert b)}{P(c \vert a,b)} \frac{P(b \vert c)}{P(b \vert a,c)}$

I think that this relation is more theoretical and fun than useful. In practical cases you would use the simpler

$P(a \vert b) = P(a \vert b,c) \frac{P(c \vert b)}{P(c \vert a,b)}$

$P(a \vert c) = P(a \vert b,c) \frac{P(b \vert c)}{P(b \vert a,c)}$

One use is maybe that this shows why you can't give a useful interpretation for $P(a \vert b)P(a \vert c)$. Say $P(\text{cancer} \vert \text{smoking})=0.1$ and $P(\text{cancer} \vert \text{sports})=0.001$; then how these relate to $P(\text{cancer} \vert \text{smoking and sports})^2$ depends on the terms in the fractions, and the product can be much different from $0.0001$, either larger or smaller, depending on the case.
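The relation can be checked numerically. Below is a sketch of mine with a made-up joint distribution over $(a,b,c)$; both sides agree, as they must for any distribution in which the conditioning events have positive probability:

```python
# Made-up joint probabilities over (a, b, c); all marginals are positive.
p = {
    (0, 0, 0): 0.05, (0, 0, 1): 0.10, (0, 1, 0): 0.15, (0, 1, 1): 0.05,
    (1, 0, 0): 0.05, (1, 0, 1): 0.10, (1, 1, 0): 0.20, (1, 1, 1): 0.30,
}

def P(**fixed):
    idx = {"a": 0, "b": 1, "c": 2}
    return sum(v for k, v in p.items()
               if all(k[idx[n]] == val for n, val in fixed.items()))

def C(event, given):
    """Conditional probability P(event | given)."""
    return P(**event, **given) / P(**given)

lhs = C({"a": 1}, {"b": 1}) * C({"a": 1}, {"c": 1})
rhs = (C({"a": 1}, {"b": 1, "c": 1}) ** 2
       * C({"c": 1}, {"b": 1}) / C({"c": 1}, {"a": 1, "b": 1})
       * C({"b": 1}, {"c": 1}) / C({"b": 1}, {"a": 1, "c": 1}))
print(lhs, rhs)  # equal: both are 40/77 = 0.5194...
```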
45,023
What does P(A|B)*P(A|C) simplify to?
Bayes' Theorem states: $$P(A|B)=\frac{P(B|A)P(A)}{P(B)}$$ so logically we have: $$P(A|B)P(A|C)\\ =\frac{P(B|A)P(A)}{P(B)}\frac{P(C|A)P(A)}{P(C)}\\ =\frac{P(B|A)P(C|A)P(A)^2}{P(B)P(C)} $$ As $B$ and $C$ are independent, I think this is as far as this route extends.
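A tiny worked example (my own, using one fair die with invented events) confirms the identity numerically; exact fractions avoid any rounding:

```python
from fractions import Fraction

# One fair die: A = "even", B = "at least 3", C = "at most 4".
omega = range(1, 7)
A = {2, 4, 6}
B = {3, 4, 5, 6}
C = {1, 2, 3, 4}

def P(S):
    return Fraction(len(S), 6)

def cond(S, T):                       # P(S | T)
    return Fraction(len(S & T), len(T))

lhs = cond(A, B) * cond(A, C)
rhs = cond(B, A) * cond(C, A) * P(A) ** 2 / (P(B) * P(C))
print(lhs, rhs)  # prints: 1/4 1/4
```

Both sides come out equal, as Bayes' theorem guarantees for any events with positive probability.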
45,024
What does P(A|B)*P(A|C) simplify to?
As JonMark Perry has already mentioned, Bayes' theorem prohibits your initial suspicion from being true. The rules for conditional probabilities specifically allow multiplication of probabilities only when conditioning on the same event (either B, or C, or both simultaneously). To show you a visualisation of the two probabilities, take a look at this image of a tree diagram. Within a branch of the tree, you may multiply the probabilities, which is interpreted as both events having to happen simultaneously: P(A and B) = P(A|B) * P(B) or P(A and C) = P(A|C) * P(C). In order to combine the two different branches (the directions being B and C), the probabilities have to be added. P(A|B) * P(B) + P(A|C) * P(C) is then the probability that the storm comes up, regardless of whether the clouds come up or how scared the dogs are.
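As a numerical sketch of that last sum (my own, with made-up probabilities, and assuming $B$ and $C$ partition the outcomes, as the two branches of a tree diagram do):

```python
# Branch probabilities; B and C are assumed to be mutually exclusive and
# exhaustive, so P(B) + P(C) = 1. All numbers are invented for illustration.
P_B, P_C = 0.3, 0.7
P_A_given_B = 0.9   # storm very likely in the "cloudy" branch
P_A_given_C = 0.2   # storm less likely in the other branch

# Law of total probability over the two branches:
P_A = P_A_given_B * P_B + P_A_given_C * P_C
print(P_A)
```

The weighted sum over the branches gives the overall probability of a storm.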
45,025
What does P(A|B)*P(A|C) simplify to?
If the objective is to somehow combine the impacts of B and C on the probability of A, I think it makes sense to evaluate the probabilities $P(A|B\cup C)$ and $P(A|B\cap C)$, where:

$P(A|B\cup C) =\frac{P(A|B)P(B)+P(A|C)P(C)-P(A\cap B\cap C)}{P(B\cup C)}$

and

$P(A|B\cap C) =\frac{P(A\cap B \cap C)}{P(B \cap C)}$
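The first formula can be checked on a toy example. Here is a sketch of mine using one fair die with hypothetical events (exact fractions, so the comparison is exact):

```python
from fractions import Fraction

# One fair die; invented events to exercise the formula.
A = {2, 4, 6}   # "even"
B = {4, 5, 6}
C = {1, 2}

def P(S):
    return Fraction(len(S), 6)

direct = P(A & (B | C)) / P(B | C)   # P(A | B u C) from the definition

pA_B = P(A & B) / P(B)               # P(A|B)
pA_C = P(A & C) / P(C)               # P(A|C)
formula = (pA_B * P(B) + pA_C * P(C) - P(A & B & C)) / P(B | C)

print(direct, formula)  # prints: 3/5 3/5
```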
45,026
How can I obtain z-values instead of t-values in linear mixed-effect model (lmer vs glmer)?
tl;dr lmer (linear mixed models) labels this column as a "t statistic", while glmer (generalized linear mixed models) labels it as a "Z statistic", but they're actually the same number. This mirrors the difference between the way lm and glm report their output. The "t statistics" reported by lmer (assuming a Gaussian distribution of observations conditional on fixed and random effects) and the "Z statistics" reported by glmer (assuming binomial, Poisson, etc. ... distributions ...) are really the same number, i.e. the point estimate divided by the estimate of its standard error (you can check this via cc <- coef(summary(fitted_model)); cc[,"Estimate"]/cc[,"Std. Error"]). In a standard least-squares fit, we can prove that this quantity is $t$-distributed, and we can derive the corresponding degrees of freedom exactly. We can also do this for some linear mixed models (i.e. balanced designs with single or nested random effects only). For more complex linear mixed models there are various approximations (Satterthwaite, Kenward-Roger; see the pbkrtest or lmerTest packages) for deriving the approximate distribution of $\bar x/\hat \sigma$. For generalized linear models, the finite-size corrections are less well understood and disseminated; the sampling distribution of $\bar x/\hat \sigma$ for non-Normal responses is not t-distributed even in simple cases (it's also not Z-distributed, except asymptotically). There is some literature on approximate finite-size corrections under the rubric of Bartlett corrections, but the standard practice in GLMs is to report these values as "Z statistics" and assume (when calculating p-values) that people know that they're assuming the sample size is large.
45,027
Lasso regression coefficients values
After you have done LASSO you should generally NOT use the selected variables in a separate linear regression. There are several ways to select a subset of predictor variables for a model. For example, you could use stepwise regression or, with few enough predictors, you could examine all possible subsets of predictors. In these cases a criterion like AIC is used to trade off the fit against the number of variables included. But in common use the selected variables are then simply incorporated into a standard linear regression. The p-values in that linear regression are not valid, as they do not incorporate the fact that you had already performed outcome-based variable selection. Also, if there are correlations among predictors, the particular variables that you choose can depend heavily upon the particular data sample you analyzed. Making this worse, the regression coefficients for those selected from a set of correlated predictors will tend to be larger in magnitude than their true values in the population. Thus the results from these types of linear models can have poor performance on new samples from the population. You can examine these behaviors by analyzing multiple bootstrap samples from your data set. Although LASSO also may select different sets of variables on different data samples, it has a major advantage over those other approaches. It also penalizes the regression coefficients of the selected variables, lowering their magnitudes from those in a standard linear regression. This penalization typically improves the ability to predict results on new data samples. So if you simply take the LASSO-selected variables and put them into a new linear regression, not only do you have the problems imposed by all variable selection approaches but also you have lost the LASSO advantage of penalizing coefficients of the selected variables to improve prediction. The p-values for that linear regression will be no more valid than for stepwise or best-subset selection. 
(Removing "insignificant" predictors in a multiple regression based on a p-value cutoff is not a good idea in any event). So your proposed approach would undo the good that you did by choosing a principled approach like LASSO, with its penalization, in the first place. As Richard Hardy notes in a comment, there can be ways to use LASSO for variable selection to incorporate into linear regressions, but those are specialized multi-step approaches for particular circumstances, and they don't seem to give any advantage over glmnet() in your application. So stick with the predictors and coefficients that LASSO provided.
45,028
Why use squared loss on probabilities instead of logistic loss?
Squared loss on binary outcomes is called the Brier score. It's valid in the sense of being a "proper scoring rule", because you'll get the lowest mean squared error when you use the correct probability. In other words, logistic loss and squared loss have the same minimum. This paper compares the properties of the Brier score ("square loss") to some other loss functions. They find that square loss/Brier score converges more slowly than logistic loss. Square loss has some advantages that might compensate in some cases: It's always finite (unlike logistic loss, which can be infinite if $p=1$ and $y=0$ or vice versa) It accelerates as the size of the errors increases (so it's less likely to allow any wildly inaccurate predictions to slip through, compared to accuracy and absolute loss) It's differentiable everywhere (unlike hinge loss and zero-one loss) It's the most commonly implemented loss in software packages, so it might be the only option in some cases
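The "proper scoring rule" claim is easy to verify directly: for $Y \sim \text{Bernoulli}(p)$, the expected squared loss is $E[(q-Y)^2] = p(1-p) + (q-p)^2$, minimized exactly at $q = p$. A short sketch using the exact expectation:

```python
def expected_brier(q, p):
    """E[(q - Y)^2] for Y ~ Bernoulli(p): loss (q-1)^2 with prob p, q^2 otherwise."""
    return p * (q - 1) ** 2 + (1 - p) * q ** 2

p = 0.7
grid = [i / 100 for i in range(101)]
best_q = min(grid, key=lambda q: expected_brier(q, p))
print(best_q)   # 0.7 -- reporting the honest probability minimizes the Brier score
```

The same argument (with the same minimizer) goes through for logistic loss; that shared minimum is what makes both scores "proper".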
45,029
Why does Maximum Likelihood estimation maximize probability density instead of probability
$f(x_i, \theta)$ may not be a probability; it is a density function. In general statistics, we don't want to have to make special exceptions for continuous versus discrete random variables all the time, especially since there is a field of mathematics that gives us a unified approach yet allows us to be rigorous about such things. The rationale for maximizing the product of the densities of a sample, or the likelihood, is much like the rationale for an integral in calculus. Take height: it is a continuous value. Suppose I believe some "normal, maximum entropy Gaussian" spread underlies its distribution in a population, parametrized by a mean and standard deviation. My height is measured with error, and even if I knew it to an atomic level I could never actually find a probability associated with that single value. The probability that my height is between 5'10" and 5'11" is small, but between 5'10.25" and 5'10.75" it is even smaller, and if I squeeze and squeeze this range into an $\epsilon$-ball, the associated probability goes to 0, even if my height happens to be the mean, mode, and median of the population sample. So how is it that this value, which is highly characteristic of the population, shows such a small probability? A zen answer might be: the infinitesimal differences make up the whole. By looking at the density, or the differential of probability, you find that a random observation achieving the mean, mode, and median is actually very characteristic: it achieves a higher likelihood than any other value under that density.
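The shrinking-interval argument can be checked numerically. Taking a hypothetical height distribution $N(70, 3^2)$ (in inches, numbers invented for illustration), the probability of an interval around the mean vanishes as the interval shrinks, while the probability per unit width converges to the density:

```python
import math

def norm_cdf(x, mu, sd):
    """Normal CDF via the error function."""
    return 0.5 * (1 + math.erf((x - mu) / (sd * math.sqrt(2))))

def norm_pdf(x, mu, sd):
    return math.exp(-0.5 * ((x - mu) / sd) ** 2) / (sd * math.sqrt(2 * math.pi))

mu, sd, x = 70.0, 3.0, 70.0                 # hypothetical heights, x at the mean
for eps in (1.0, 0.1, 0.001):
    prob = norm_cdf(x + eps / 2, mu, sd) - norm_cdf(x - eps / 2, mu, sd)
    print(eps, prob, prob / eps)            # prob -> 0, but prob/eps -> the density
print(norm_pdf(x, mu, sd))                  # the limit: about 0.133
```

The interval probability goes to zero at every point, including the most "characteristic" one; the density is what survives the limit.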
45,030
Why does Maximum Likelihood estimation maximize probability density instead of probability
Your question applies only to continuous random variables; in the case of discrete random variables you do use probabilities, not densities. For a continuous random variable, the probability of each point (one value of the variable) is 0, and only intervals have positive probabilities, obtained by integrating the density function over the interval. Since the sample consists of points, you cannot multiply probabilities (the result will always be 0) and you must multiply densities (which are in some sense a "representative" of the probability but cannot be called a probability). To be even more specific: "probability density" and "density" are one and the same - two names for the same function. To understand what the density function means you should have a knowledge of calculus. The density function f(x) can be explained as the "slope" of the cumulative probability at point x. f(x)dx can be explained as the probability of the point x, which on one hand is equal to 0 (because dx is equal to 0), but on the other hand becomes greater than 0 when integrated over an interval. So f(x) only represents how "dense" the probability is at point x; it is not itself a probability, but it can still be used as a "proxy" for probability.
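For the discrete case mentioned here, the likelihood really is a product of genuine probabilities, and maximizing it recovers the obvious estimate. A quick sketch with made-up Bernoulli data:

```python
import math

data = [1, 0, 1, 1, 0, 1, 1, 0, 1, 1]       # 7 successes out of 10 (invented data)

def log_likelihood(p):
    # each factor is a genuine probability: p if x = 1, (1 - p) if x = 0
    return sum(math.log(p) if x == 1 else math.log(1 - p) for x in data)

grid = [i / 1000 for i in range(1, 1000)]   # candidate p, avoiding 0 and 1
p_hat = max(grid, key=log_likelihood)
print(p_hat)    # 0.7, the sample proportion
```

Only in the continuous case does this product of probabilities collapse to zero, which is why densities must take over there.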
45,031
Why does Maximum Likelihood estimation maximize probability density instead of probability
I read the question as: why do we start from the density function $f(\boldsymbol{x}|\theta)$ (with $\theta$ constant) and change point of view to interpret it as a function of $\theta$ (with the $\boldsymbol{x}$'s constant) that we want to maximize? Intuitively and absolutely not rigorously, if we consider an infinitesimal interval $d\boldsymbol{x}$ around $\boldsymbol{x}$, then $f(\boldsymbol{x}|\theta)d\boldsymbol{x}$ can be thought of as the infinitesimal probability of landing inside that infinitesimal interval, so in a sense it is a probability (i.e. when "summed" over all possible infinitesimal intervals it yields 1, as you would expect from a probability; this summation is called integration in calculus). Now you want to maximize over $\theta$, so you want to find the value $\hat{\theta}$ such that: $$\forall \theta: f(\boldsymbol{x}|\hat{\theta})d\boldsymbol{x}\geq f(\boldsymbol{x}|\theta)d\boldsymbol{x}$$ Now... assuming we trust that we can divide by $d\boldsymbol{x}$ on both sides, we obtain: $$\forall \theta: f(\boldsymbol{x}|\hat{\theta})\geq f(\boldsymbol{x}|\theta)$$ i.e. $\hat{\theta}$ is the value of $\theta$ that maximizes $f(\boldsymbol{x}|\theta)$. Again, this is not rigorous but I hope it gives you the gist of it. If these "infinitesimals" disturb you, try to think in terms of finite probabilities of falling inside finite intervals $\Delta\boldsymbol{x}$, and then take the limit as the interval's width goes to 0 around each of the $\boldsymbol{x}$'s...
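The division-by-$d\boldsymbol{x}$ step can be sanity-checked numerically: maximizing the product of small finite-interval probabilities (at a fixed width $\Delta x$) over $\theta$ picks out the same $\hat\theta$ as maximizing the product of densities. A sketch for the mean of a unit-variance Gaussian (observations made up for illustration):

```python
import math

def norm_cdf(x, mu):
    return 0.5 * (1 + math.erf((x - mu) / math.sqrt(2)))

def norm_pdf(x, mu):
    return math.exp(-0.5 * (x - mu) ** 2) / math.sqrt(2 * math.pi)

data = [4.2, 5.1, 3.8, 4.9]                    # made-up observations
dx = 0.01                                      # fixed small interval width
grid = [3 + i / 100 for i in range(301)]       # candidate means

# likelihood built from densities vs. from finite-interval probabilities
best_density = max(grid, key=lambda m: math.prod(norm_pdf(x, m) for x in data))
best_interval = max(grid, key=lambda m: math.prod(
    norm_cdf(x + dx / 2, m) - norm_cdf(x - dx / 2, m) for x in data))
print(best_density, best_interval)             # both 4.5, the sample mean
```

Since the $\Delta x$'s are the same constant for every candidate $\theta$, they rescale the likelihood without moving its maximum.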
45,032
Why does Maximum Likelihood estimation maximize probability density instead of probability
The key idea here is that although a point probability is not defined for a continuous probability distribution, the probability that the random variable $X$ is "around" $x$ equals $f(X=x)dx$. Therefore, when this is multiplied over all of the data points, the likelihood function is only rescaled by the constant factor of all these $dx$'s, which does not change where the maximum occurs, so we can ignore them and just maximize the product of $f(X=x_i)$ over all data points. Hope that helps, Cheers! Reference: https://ocw.mit.edu/courses/mathematics/18-05-introduction-to-probability-and-statistics-spring-2014/readings/MIT18_05S14_Reading10b.pdf
45,033
ROC as feature selection
Univariate feature selection is generally a poor method. This question is deftly answered by silverfish in the context of correlation, but all his arguments apply to your case as well. In short, there is no reason to believe that univariately checking how each individual variable $x$ is related to your response $y$ reveals anything about the multivariate nature of the relationship between $X$ and $y$. It's quite possible that you end up screening out many of your good predictors. As you point out, LASSO, ridge, or glmnet are much preferred methods for feature selection in a multiple regression model, as they: Take a fully multivariate view of your predictor / response relationship. Avoid making high variance, binary decisions like "this variable is completely in, this variable is completely out". Lend themselves naturally to cross-validation and other model validation techniques. You should carefully and respectfully start pointing your team towards a more modern and disciplined approach. (*) You also don't mention if your team is testing for non-linear relationships when fitting these univariate models. At the very least, these univariate models should be based on some basis expansion of the feature, like cubic splines. Clearly if they are only testing for univariate linear relationships, then there are some issues there as well.
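A tiny simulation (entirely made up for illustration) shows the failure mode: two predictors that are individually almost uncorrelated with the response, yet jointly determine it exactly. Any univariate screen, whether by correlation or by per-variable ROC, would discard both.

```python
import math
import random

random.seed(1)
n = 2000
z  = [random.gauss(0, 10) for _ in range(n)]          # strong shared component
x1 = [zi + random.gauss(0, 1) for zi in z]
x2 = [zi + random.gauss(0, 1) for zi in z]
y  = [a - b for a, b in zip(x1, x2)]                  # y = x1 - x2 exactly

def corr(u, v):
    """Pearson correlation, pure stdlib."""
    mu, mv = sum(u) / len(u), sum(v) / len(v)
    su = math.sqrt(sum((a - mu) ** 2 for a in u))
    sv = math.sqrt(sum((b - mv) ** 2 for b in v))
    return sum((a - mu) * (b - mv) for a, b in zip(u, v)) / (su * sv)

print(corr(x1, y), corr(x2, y))                   # each screen is weak, about +/-0.07
print(corr([a - b for a, b in zip(x1, x2)], y))   # jointly: exactly 1.0
```

The shared component $z$ masks each variable's marginal relationship with $y$, which is precisely the multivariate structure a univariate filter cannot see.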
45,034
ROC as feature selection
Echoing Matthew's response in another light: many markers have a concept of stratified predictive accuracy. In this sense they provide extremely good predictive accuracy in a subgroup or in tandem with another marker. Two examples from the health sciences: Suppose for instance two types of breast cancer grow in women who are premenopausal and postmenopausal. The majority of women who are diagnosed with cancer are postmenopausal, yet the cancers diagnosed in premenopausal women are extremely aggressive, difficult to treat, and their genetic markers are unknown. If you naively presume that all cancers have the same genotype and genetic markers, you will show a low predictive accuracy for a genetic marker that yields a 100% area under the ROC curve in premenopausal women. Another example is how a dyad of conditions might be necessary for disease. For instance, in nephrology, people are typically at risk and subsequently are diagnosed with Stage 3 Chronic Kidney Disease once they develop both hypertension and diabetes. CKD is known to progress to ESRD, which requires chronic renal replacement therapy and is generally very bad. Interventions to halt the progress of CKD are poorly understood, so earlier diagnostics are needed, but the manifestations of the disease are coincident with many other conditions; it is only a specific spectrum of conditions or a combination of markers that may inform clinicians that the patient has CKD. Just the same, a stepwise approach to evaluating a sequence of markers does not hold the promise of identifying an optimal ROC in any scientific sense, albeit perhaps in a statistical sense it would. In my experience of evaluating markers, the best use for ROC is evaluating a pre-specified hypothesis about a specific marker rather than a huge list thereof.
The CIs for ROCs and their AUCs tend to be quite wide since they deal with empirical functions, and after correction for multiple testing, the risk of Type II error is just too high to justify evaluating 100 markers or more. The exception to this might be the understanding that the nature of the analysis is a hypothesis generating study. That means, however, that none of the previously collected data may serve to confirm this hypothesis. A separate study must be done. Many clinicians are disheartened at how irreproducible results can be from such fishing expeditions.
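The first example can be mimicked with a toy simulation (all numbers invented): a marker that separates cases from controls only inside a small subgroup has a mediocre overall AUC but a near-perfect subgroup AUC, so overall screening by AUC would discard it.

```python
import random

random.seed(2)

def auc(scores, labels):
    """Empirical AUC: probability a random case outscores a random control."""
    pos = [s for s, l in zip(scores, labels) if l == 1]
    neg = [s for s, l in zip(scores, labels) if l == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

scores, labels, subgroup = [], [], []
for _ in range(600):
    in_sub = random.random() < 0.2        # hypothetical subgroup, 20% of patients
    disease = random.random() < 0.3
    # the marker is elevated in cases ONLY within the subgroup
    marker = random.gauss(3.0 if (in_sub and disease) else 0.0, 1.0)
    scores.append(marker)
    labels.append(int(disease))
    subgroup.append(in_sub)

overall = auc(scores, labels)
in_group = auc([s for s, g in zip(scores, subgroup) if g],
               [l for l, g in zip(labels, subgroup) if g])
print(f"overall AUC: {overall:.2f}, subgroup AUC: {in_group:.2f}")
```

Here the overall AUC sits around 0.6 while the subgroup AUC is close to 1, illustrating why a pre-specified, stratified hypothesis beats a blanket screen.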
45,035
ROC as feature selection
Since you are using SAS, I thought I'd share this. I'm not sure what model you are using, but if you are using logistic regression this may be a useful resource: Sample 54866: Logistic model selection using area under curve (AUC) or R-square selection criteria. Under "Details", it reads: In addition to the AIC and BIC criteria available in PROC HPLOGISTIC, the SELECT macro can also choose models using the area under the ROC curve (CHOOSE=AUC), the R-square statistic (CHOOSE=RSQUARE), or the max-rescaled R-square statistic (CHOOSE=RSQUARE_RESCALED). Under "Limitations", it reads: The best subsets selection method (SELECTION=SCORE) available in PROC LOGISTIC is not available in the SELECT macro. The area under the ROC curve criterion (CHOOSE=AUC) is not available with nominal, multinomial (LINK=GLOGIT) models.
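The same AUC-based selection idea can be sketched outside SAS. This is a hypothetical illustration (not the SELECT macro itself) using scikit-learn in Python: two candidate logistic models are fit on training data and the one with the higher held-out AUC is chosen.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 4000
X = rng.normal(size=(n, 2))
# outcome genuinely depends on both predictors
p = 1 / (1 + np.exp(-(1.5 * X[:, 0] + 1.5 * X[:, 1])))
y = rng.binomial(1, p)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# candidate 1: first predictor only; candidate 2: both predictors
auc_reduced = roc_auc_score(
    y_te, LogisticRegression().fit(X_tr[:, :1], y_tr).predict_proba(X_te[:, :1])[:, 1])
auc_full = roc_auc_score(
    y_te, LogisticRegression().fit(X_tr, y_tr).predict_proba(X_te)[:, 1])

best = "full" if auc_full > auc_reduced else "reduced"
```

Note that held-out (or cross-validated) AUC is the honest version of this comparison; in-sample AUC always favors the bigger model.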
45,036
Normal Distribution with Uniform Mean
You can compute the mean and variance of the compound distribution $X$ with the law of total expectation and law of total variance.

Mean:
$$ E[X] = E \left[ E [X \mid U ] \right] = E[U] = \frac{b + a}{2}$$
Which is, as you observe, the mean of the uniform distribution.

Variance:
$$ Var[X] = E[ Var[X \mid U] ] + Var[ E[X \mid U ] ] = E[3^2] + Var[U] = 9 + \frac{(b - a)^2}{12} $$
Which is, as you observe, the sum of the two variances.

The compound distribution can certainly be far from normal. Consider the case where the standard deviation of the normal distribution is very small in relation to the width of the uniform distribution.

u <- runif(10000)
n <- rnorm(10000, mean=u, sd=0.01)
hist(n, breaks=100)
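Both formulas are easy to sanity-check by Monte Carlo (a sketch in Python rather than R, with $a=0$, $b=10$, $\text{sd}=3$ chosen arbitrarily):

```python
import numpy as np

rng = np.random.default_rng(42)
a, b, sd = 0.0, 10.0, 3.0

u = rng.uniform(a, b, size=1_000_000)   # U ~ Uniform(a, b)
x = rng.normal(loc=u, scale=sd)         # X | U ~ Normal(U, sd^2)

mean_theory = (b + a) / 2               # law of total expectation: 5.0
var_theory = sd**2 + (b - a)**2 / 12    # law of total variance: 9 + 100/12
```

The simulated mean and variance land on the theoretical values to within Monte Carlo error.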
45,037
Normal Distribution with Uniform Mean
A distribution related to a special case of your question was described by Bhattacharjee, Pandit, and Mohan (1963). It assumes that the uniform distribution is centered around the global mean $\mu$ and has $(\mu-a, \mu+a)$ bounds. In standard form it has probability density function
$$ f(z) = \frac{1}{2a} \left[\Phi\left(z+a\right) - \Phi\left(z-a\right)\right] $$
and cumulative distribution function
$$ F(z) = \frac{1}{2a} \left[(z+a)\,\Phi\left(z+a\right) - (z-a)\,\Phi\left(z-a\right) + \phi\left(z+a\right) - \phi\left(z-a\right)\right] $$
where $\Phi$ is a standard normal cdf and $\phi$ is a standard normal pdf; the $(z \pm a)$ factors come from $\int \Phi(u)\,du = u\,\Phi(u) + \phi(u)$. It emerges when $U \sim \mathcal{U}(-a, a)$ and $X \sim \mathcal{N}(\mu, \sigma^2)$: then $Z = U+X$ follows the distribution described by Bhattacharjee et al.

library(extraDistr)

set.seed(123)
u <- runif(10000, -1, 1)
n <- rnorm(10000, mean=u, sd=1)
hist(n, breaks=100, freq = F)
curve(dbhatt(x, 0, 1, 1), -6, 6, add = T, col = "red")

set.seed(123)
u <- runif(10000, -3, 3)
n <- rnorm(10000, mean=u, sd=1)
hist(n, breaks=100, freq = F)
curve(dbhatt(x, 0, 1, 3), -6, 6, add = T, col = "red")

Bhattacharjee, G.P., Pandit, S.N.N., and Mohan, R. (1963). Dimensional chains involving rectangular and normal error-distributions. Technometrics, 5, 404-406.
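A quick numerical check of the standard-form density (a Python sketch; $F$ is obtained by integrating $\Phi$ via $\int \Phi(u)\,du = u\,\Phi(u) + \phi(u)$, which produces the $(z \pm a)\Phi(z \pm a)$ terms):

```python
import numpy as np
from scipy.stats import norm

a = 1.0
z = np.linspace(-8, 8, 4001)

# standard-form pdf: f(z) = [Phi(z+a) - Phi(z-a)] / (2a)
f = (norm.cdf(z + a) - norm.cdf(z - a)) / (2 * a)

# cdf from integrating Phi
F = ((z + a) * norm.cdf(z + a) - (z - a) * norm.cdf(z - a)
     + norm.pdf(z + a) - norm.pdf(z - a)) / (2 * a)

area = float(np.sum(f) * (z[1] - z[0]))   # pdf should integrate to ~1
deriv_err = float(np.max(np.abs(np.gradient(F, z) - f)))  # F' should equal f
```

The density integrates to one, $F$ runs from 0 to 1, and differentiating $F$ numerically recovers $f$.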
45,038
Interpretation of p-value in Mann-Whitney rank test
The p-value represents the probability of getting a test-statistic at least as extreme$^\dagger$ as the one you had in your sample, if the null hypothesis were true. A high p-value indicates you saw something really consistent with the null hypothesis (e.g. tossing 151 heads in 300 tosses of a coin you're examining for fairness), and something that's really consistent with the null being true would not cause you to think it was false. (In some situations it might perhaps lead you to think more carefully about the assumptions.) If you thought that a and b were very similar in values then you'd expect to obtain a high p-value, not a low one. (If you expected a low p-value, you may have some misunderstanding of how p-values work.) That is, the p-value from the test is consistent with your statement about them being very similar*. Low p-values are the things that would cause you to hold doubt about the null. A caveat: the fact that the values move together in pairs across many orders of magnitude indicates a very high correlation, so the assumption of independence is untenable (the data seem to be paired). It should surely raise doubts about the suitability of the test. (I presume you made up the values to see how the test behaves, but if that's real data you have a problem.) $\dagger$ away from what you'd expect to see under the null, in the direction of what you'd see under the alternative * however note that values may in some circumstances be very similar (at least in some sense) without the p-value being high.
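The coin example can be made concrete (a small Python sketch; the exact p-value depends on the two-sided convention the test uses, but it will be close to 1 in any case):

```python
from scipy.stats import binomtest

# 151 heads in 300 tosses of a coin you're examining for fairness
res = binomtest(151, 300, 0.5, alternative="two-sided")
p_value = res.pvalue   # very close to 1: utterly consistent with a fair coin
```

A result this close to the expected 150 heads gives a p-value near 1, which is exactly the "really consistent with the null" situation described above.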
45,039
Interpretation of p-value in Mann-Whitney rank test
If you are trying to prove that the two vectors are approximately equal then you have two issues: 1) What exactly do you mean by "equal"? and 2) No usual test is appropriate. For 1) you have to consider whether the data are independent or not and whether you want to test means or medians or whatever. For 2) you should look into tests of equivalence such as TOST (two one-sided t-tests) which allow you to test the hypothesis you have in mind.
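A minimal TOST sketch (in Python rather than R; the equivalence margin delta is an arbitrary choice here, and in practice it must be justified on subject-matter grounds):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
a = rng.normal(0.0, 0.1, size=200)
b = a + rng.normal(0.0, 0.05, size=200)   # nearly identical paired vectors
d = a - b
delta = 0.5   # equivalence margin: |mean difference| < 0.5 counts as "equal"

# two one-sided tests on the paired differences:
# H0: mean(d) <= -delta  vs  H1: mean(d) > -delta
p_lower = stats.ttest_1samp(d, -delta, alternative="greater").pvalue
# H0: mean(d) >= delta   vs  H1: mean(d) < delta
p_upper = stats.ttest_1samp(d, delta, alternative="less").pvalue

p_tost = max(p_lower, p_upper)   # small => conclude equivalence within +/- delta
```

Unlike the Mann-Whitney test, a small p here is evidence *for* approximate equality, which is the hypothesis the questioner actually has in mind.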
45,040
Transforming data
Transformations are like drugs ... some are good for you and some aren't. Transforming data by scaling is almost always a good idea. Transforming time series data by taking differences can be a bad idea, as an unwarranted difference can actually inject structure into the data. Transforming data by replacing anomalous values with cleansed values, enabling a clearer picture robust to the anomalies, is also a good idea, just as long as you get motivated to find out why the data was anomalous AND compute confidence limits that include the possibility of anomalous values. See @Aksakal's very wise words on this: How to fit a model for a time series that contains outliers. Power transforms like logs, or any other assumed transformation, can be a bad idea. See When (and why) should you take the log of a distribution (of numbers)? for a discussion of when and why you should transform. One caveat: there are certain model objectives, i.e. specific models, which require transformations, but these are usually special-purpose and rare.
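One concrete illustration of the "when and why" point (a Python sketch with simulated data): a log transform can symmetrize multiplicative, right-skewed data, because it recovers the underlying additive scale, but applying it blindly to data without that structure buys nothing.

```python
import numpy as np
from scipy.stats import skew

rng = np.random.default_rng(0)
raw = rng.lognormal(mean=0.0, sigma=1.0, size=10_000)  # strongly right-skewed

skew_raw = skew(raw)           # large and positive
skew_log = skew(np.log(raw))   # ~0: the log recovers the underlying normal
```

Here the transformation is warranted because the data-generating process really is multiplicative; the diagnostic is the shape before and after, not habit.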
45,041
Transforming data
There is no particular reason for wanting to transform your data as far as the adequacy of the model is concerned. However you may want to re-scale your outcome to make the coefficients lie in a more manageable range. For instance, instead of having sales as the raw count you might express it as so many millions or so many thousands. This would have the effect of dividing your coefficient for rainy days by 1,000,000 or 1,000 respectively, which might make it look more sensible. This is often done for predictor variables, but in your case, from your description, it is the outcome which needs attention. Your model adequacy is not changed though, which is the important thing. As pointed out by commentators, I am assuming that sales are billions of some currency unit, which, if it is the sales of different products sold at different prices, may well fulfill the usual assumptions of linear regression. However, if it is billions of umbrellas, and hence a count, then a different model such as Poisson may be appropriate.
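The rescaling point is easy to verify numerically (a hypothetical Python sketch with made-up sales figures): dividing the outcome by a constant divides every coefficient by exactly that constant, while the fit itself is unchanged.

```python
import numpy as np

rng = np.random.default_rng(0)
rainy_days = rng.integers(0, 15, size=60).astype(float)
# hypothetical sales in raw currency units
sales = 2e9 + 5e7 * rainy_days + rng.normal(0, 1e8, size=60)

X = np.column_stack([np.ones_like(rainy_days), rainy_days])

beta_raw, *_ = np.linalg.lstsq(X, sales, rcond=None)         # sales as-is
beta_millions, *_ = np.linalg.lstsq(X, sales / 1e6, rcond=None)  # sales in millions

ratio = beta_raw[1] / beta_millions[1]   # exactly the scale factor, 1e6
```

The slope on rainy days shrinks from tens of millions to a double-digit number of millions-of-units, with no change to model adequacy.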
45,042
Strong vs Weak Assumptions
Let $\mathbf u$ be the $T \times 1$ column error vector and $\mathbf X$ be the $T \times k$ regressor matrix, where $T$ is the sample size. Then strict exogeneity is defined as
$$E\left(\mathbf u \mid \mathbf X\right) = \mathbf 0$$
This can be decomposed and written perhaps more clearly as
$$E(u_t \mid \mathbf X) = 0,\;\;\; t=1,...,T$$
which shows that strict exogeneity requires that each error term is mean-independent of all regressors, "past, present and future". By contrast, contemporaneous exogeneity is defined as, denoting by $\mathbf x_t$ a row of $\mathbf X$ (i.e. the regressors at one period in time),
$$E(u_t \mid \mathbf x_t) =0, \;\;\; t=1,...,T$$
This is weaker, because it incorporates only a subset of the assumptions implied by strict exogeneity. Other important relations often encountered as assumptions or desiderata are: Contemporaneous uncorrelatedness (or orthogonality)
$$E(u_t \mathbf x_t) = \mathbf 0, \;\;\; t=1,...,T$$
This is sometimes also called "predetermined regressors", but this last term is also used for stronger conditions in the literature. This is weaker than mean-independence, because mean-independence implies non-correlation but not vice versa. An assumption of "intermediate" strength is
$$E(u_t \mathbf x_s) = \mathbf 0 \;\;\; \forall (t,s)$$
Here we do require the relation to hold for each error and all periods, but we only require orthogonality and not mean-independence. Call it "strict orthogonality" perhaps? Note: the interchangeable use of the terms "uncorrelatedness" and "orthogonality" depends critically on the assumption that the error term has zero mean. Otherwise, the correct term is "orthogonality" and not "uncorrelatedness" (see here).
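The gap between the two conditions shows up in a small simulation (a Python sketch with made-up dynamics): the regressor reacts to last period's shock, so the contemporaneous condition $E(u_t \mathbf x_t) = \mathbf 0$ holds while $E(u_t x_{t+1}) \neq 0$, violating strict exogeneity.

```python
import numpy as np

rng = np.random.default_rng(0)
T = 200_000
u = rng.normal(size=T)          # error terms

# x_t responds to the previous period's shock u_{t-1} (feedback), plus noise
x = np.empty(T)
x[0] = rng.normal()
x[1:] = 0.8 * u[:-1] + rng.normal(size=T - 1)

corr_same = np.corrcoef(x[1:], u[1:])[0, 1]       # E(u_t x_t): ~0
corr_feedback = np.corrcoef(x[1:], u[:-1])[0, 1]  # E(u_t x_{t+1}): clearly nonzero
```

The contemporaneous correlation is zero to sampling error, but today's shock predicts tomorrow's regressor, which is exactly what strict exogeneity rules out.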
45,043
Strong vs Weak Assumptions
Think about a regression of sales of ice cream on advertising, where the error comes from the effect of weather, which is unobserved to you, but not to the ice cream man or his customers. You care about what advertising does to sales. Assume for the sake of simplicity that weather has no persistence across days, nor was it recorded to be included in our model as a control. The concern is that the ice cream man uses advertising to smooth the effect of weather, so days with high advertising are also cooler days and low advertising days are hotter, so the coefficient that comes from comparing days with more marketing to less marketing will be contaminated by that relationship. Advertising will look less effective than it really is since we don't know what the weather was. That is the most basic kind of endogeneity. The weak version says advertising today cannot respond to today's weather. In other words, knowing the advertising level today tells me nothing about the weather today, on average. That might seem reasonable, since advertising takes time to roll out. Flyers need to be designed and printed and someone must be hired to hand them out. But advertising tomorrow could be correlated with today's cool weather. You can also have advertising today correlated with tomorrow's weather (or beliefs about it). The strong version says that in addition to today, advertising yesterday and tomorrow does not tell you about weather today, so the ice cream parlor cannot select advertising tomorrow to compensate for the effect of cool weather today. That is a much more restrictive assumption, since it also rules out compensating behavior across time in addition to within each day.
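The downward bias in that story can be checked in a toy simulation (a Python sketch with made-up numbers): the true advertising effect is 1, but OLS that omits weather estimates roughly 0.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
weather = rng.normal(size=n)                  # unobserved by the analyst
ads = -weather + rng.normal(size=n)           # ice cream man smooths cool weather
sales = 1.0 * ads + 2.0 * weather + rng.normal(size=n)  # true ad effect = 1

# OLS of sales on ads alone, with weather left in the error term
slope = np.cov(ads, sales)[0, 1] / np.var(ads, ddof=1)
```

Because advertising is set *against* the weather, its covariance with the error exactly offsets its true effect in this parameterization, and advertising looks useless even though it works.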
45,044
R - How are the significance codes determined when summarizing a logistic regression model?
Firstly, the z or t value (depending on what family you run) is the coefficient divided by the standard error. The p value is then derived from the normal or t distributions using this z or t value. The stars don't really add much in my view. You will see underneath the table of coefficients that there is a line which starts 'Signif. codes'. This gives the key. So a coefficient marked *** is one whose p value < 0.001. One whose coefficient is marked ** is p < 0.01. And so on. For example (taken from https://stats.idre.ucla.edu/r/dae/logit-regression/):

mydata <- read.csv("https://stats.idre.ucla.edu/stat/data/binary.csv")
mydata$rank <- factor(mydata$rank)
mylogit <- glm(admit ~ gre + gpa + rank, data = mydata, family = "binomial")
summary(mylogit)

Gives the following output:

Call:
glm(formula = admit ~ gre + gpa + rank, family = "binomial", data = mydata)

Deviance Residuals:
    Min       1Q   Median       3Q      Max
-1.6268  -0.8662  -0.6388   1.1490   2.0790

Coefficients:
             Estimate Std. Error z value Pr(>|z|)
(Intercept) -3.989979   1.139951  -3.500 0.000465 ***
gre          0.002264   0.001094   2.070 0.038465 *
gpa          0.804038   0.331819   2.423 0.015388 *
rank2       -0.675443   0.316490  -2.134 0.032829 *
rank3       -1.340204   0.345306  -3.881 0.000104 ***
rank4       -1.551464   0.417832  -3.713 0.000205 ***
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

(Dispersion parameter for binomial family taken to be 1)

    Null deviance: 499.98  on 399  degrees of freedom
Residual deviance: 458.52  on 394  degrees of freedom
AIC: 470.52

Number of Fisher Scoring iterations: 4

You can see that gre has a p value = 0.038. This has one asterisk by it because that is < 0.05. rank4 has a p value = 0.0002 and so has three asterisks because this is < 0.001. I just use the asterisks to quickly scan the table but I never look at them beyond that.
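The arithmetic behind any row of that table can be reproduced directly (a Python sketch using the standard normal, since a binomial-family glm reports z values):

```python
from scipy.stats import norm

# gre row from the summary output above
estimate, std_error = 0.002264, 0.001094
z = estimate / std_error       # coefficient / standard error, about 2.070
p = 2 * norm.sf(abs(z))        # two-sided p-value, about 0.0385
```

Since 0.0385 falls between 0.01 and 0.05, R prints a single asterisk for that row.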
45,045
R - How are the significance codes determined when summarizing a logistic regression model?
If you want to know right away which variables (independent variables, IVs) impact your dependent variable (DV) the most, you could use the following:

install.packages("caret") # run install only if you've never installed it before
library(caret)
fit <- lm(DV ~ IV1 + IV2 + IV3, data=mydata)
varImp(fit, scale = FALSE)

    overall
IV1   -4.3
IV2    7.65
IV3   12.37

The IV with the highest absolute value has the most impact. Make sure you don't use 2 IVs with a high correlation between themselves, to avoid multicollinearity.
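For a linear model, what caret's varImp reports is essentially the absolute t statistic of each coefficient. The same computation can be sketched from scratch in Python (made-up data; variable names are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
x1, x2 = rng.normal(size=n), rng.normal(size=n)
y = 0.2 * x1 + 2.0 * x2 + rng.normal(size=n)   # x2 has the stronger effect

X = np.column_stack([np.ones(n), x1, x2])      # design matrix with intercept
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta
sigma2 = resid @ resid / (n - X.shape[1])      # residual variance estimate
se = np.sqrt(np.diag(sigma2 * np.linalg.inv(X.T @ X)))
t_values = beta / se                           # |t| is the varImp-style score

strongest = ["intercept", "x1", "x2"][int(np.argmax(np.abs(t_values)))]
```

Ranking by |t| mixes effect size with estimation precision, which is worth keeping in mind before reading it as pure "impact".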
45,046
Do I need to guess a distribution to use MLE? [duplicate]
To apply parametric MLE, you need to specify a parametric distribution. For non-parametric MLE, you do not specify a parametric distribution. The most popular of the non-parametric MLE approaches is called Empirical Likelihood https://en.wikipedia.org/wiki/Empirical_likelihood (not much of a write-up on that page). The classic book in the field is "Empirical Likelihood" by Art B. Owen https://www.amazon.com/Empirical-Likelihood-Art-B-Owen/dp/1584880716 . The freely accessible paper "Empirical Likelihood", Art B. Owen, Annals of Statistics 1990, Vol. 18, pp. 90-120 https://projecteuclid.org/download/pdf_1/euclid.aos/1176347494 will give you a pretty good idea of the field. Freely available slides by Owen are at http://statweb.stanford.edu/~owen/pubtalks/DASprott.pdf . Basically, Empirical Likelihood makes use of the empirical distribution of the data as the basis for forming an empirical likelihood. This empirical likelihood can be maximized, subject to various constraints, sometimes in closed form, but often requiring numerical constrained nonlinear optimization methods. It can be used as the basis for computing non-parametric likelihood ratio tests and confidence regions (not necessarily ellipsoidal or symmetric). There are relationships between empirical likelihood and bootstrapping, and indeed, the two can be combined. If you don't have a solid rationale for use of a particular parametric distribution, you're generally better off using a non-parametric method, such as empirical likelihood. The downside may be that the computations are more computationally intensive, and the confidence regions which result do not look like those most people have come to expect based on, for instance, Normal distribution assumptions.
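A minimal sketch of what empirical likelihood computes for a mean (Python; the closed-form weights $p_i = 1/\{n(1+\lambda(x_i-\mu_0))\}$, with $\lambda$ found by one-dimensional root-finding, follow Owen's construction):

```python
import numpy as np
from scipy.optimize import brentq

def el_mean_stat(x, mu0):
    """-2 log empirical likelihood ratio for H0: E[X] = mu0."""
    d = np.asarray(x, dtype=float) - mu0
    if d.max() <= 0 or d.min() >= 0:
        return np.inf  # mu0 outside the convex hull of the data
    # profile out the Lagrange multiplier: sum d_i / (1 + lam * d_i) = 0,
    # with lam restricted so every weight stays positive
    eps = 1e-10
    lo = -1.0 / d.max() + eps
    hi = -1.0 / d.min() - eps
    g = lambda lam: np.sum(d / (1.0 + lam * d))
    lam = brentq(g, lo, hi)
    # p_i = 1 / (n (1 + lam d_i));  -2 log R = 2 sum log(1 + lam d_i)
    return 2.0 * np.sum(np.log1p(lam * d))

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
stat_at_mean = el_mean_stat(x, x.mean())  # 0: uniform weights 1/n maximize
stat_off = el_mean_stat(x, 2.0)           # > 0; compare to a chi-square(1)
```

Calibrating `stat_off` against a chi-square with one degree of freedom gives the non-parametric likelihood ratio test and (by inverting it) the asymmetric confidence regions mentioned above.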
Do I need to guess a distribution to use MLE? [duplicate]
To apply MLE you need to assume a distribution. So, yes, you usually need to have a distribution in mind. The standard intro texts use Gaussian. For instance, they'd show you how assuming a Gaussian distribution in the linear model leads MLE to the same estimators as least squares regression. A Gaussian distribution with an independence (random sample) assumption is a popular choice. However, other distributions are used when they're more suitable for a problem. Often, you don't have to "guess" the distribution, but already know what family it belongs to. Maybe you know it must be Poisson, for instance. In this case you plug it into the MLE equations and derive the appropriate likelihood function to estimate the parameter of the distribution.
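As a toy sketch of that Poisson case (in Python, with made-up data): maximizing the Poisson log-likelihood over a grid of candidate rates recovers the sample mean, which is the known closed-form MLE for the Poisson rate.

```python
import math

def poisson_loglik(lam, data):
    # Poisson log-likelihood: sum_k [k*log(lam) - lam - log(k!)]
    return sum(k * math.log(lam) - lam - math.lgamma(k + 1) for k in data)

data = [2, 1, 4, 0, 3, 2, 1, 3]
# Scan a grid of candidate rates; the maximizer should be the sample mean.
grid = [0.01 * i for i in range(1, 1001)]
lam_hat = max(grid, key=lambda lam: poisson_loglik(lam, data))
print(lam_hat, sum(data) / len(data))  # both 2.0 (up to grid resolution)
```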
Do I need to guess a distribution to use MLE? [duplicate]
How can I make this guess? As pointed out in other answers, sometimes you know what the distribution must be due to the nature of the data generating process. Consider Generalized Extreme Value Distribution as described in Wikipedia: By the extreme value theorem the GEV distribution is the only possible limit distribution of properly normalized maxima of a sequence of independent and identically distributed random variables. Of course, this is an asymptotic result, but you may count on it for sufficiently large samples. Other times you may just have a rough idea and would not know exactly. However, this may suffice in the framework of quasi maximum likelihood estimation (QMLE). QMLE allows consistently estimating model parameters and doing inference when the assumed distribution does not match the true distribution. Even though it does not work universally (not all distributions can be assumed in place of other distributions), it can still be pretty useful. (For an intuitive explanation of why and how QMLE works see Idea and intuition behind quasi maximum likelihood estimation (QMLE).)
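A tiny simulation of the QMLE idea (my own illustration, with made-up parameters): fit a Gaussian model by maximum likelihood to data that are actually exponential. The Gaussian MLE of the mean is the sample average, and it remains a consistent estimator of the true mean (here 2) even though the assumed family is wrong; likewise the Gaussian variance MLE stays consistent for the true variance (here 4).

```python
import random

random.seed(42)
# True data-generating process: exponential with mean 2 (not Gaussian).
data = [random.expovariate(1 / 2.0) for _ in range(20000)]

n = len(data)
# Gaussian MLEs under the (misspecified) normal model:
mu_hat = sum(data) / n                              # sample mean
sigma2_hat = sum((x - mu_hat) ** 2 for x in data) / n  # biased sample variance
print(mu_hat, sigma2_hat)  # close to 2 and 4 respectively
```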
Do I need to guess a distribution to use MLE? [duplicate]
In general, no, you cannot use MLE to find which family of distributions might provide a good parametric model for an outcome. That's not to say that there aren't some exploratory techniques that could shed some light on possibilities. But, as we know from statistics, using the same data as a hypothesis-generating and hypothesis-confirming tool will lead to increased false positive errors. Ideally a family of distributions is chosen before the data are collected. You can often think about the data-generating mechanism and/or draw parallels between what other researchers have used and discussed. For instance, Poisson variables come from independent exponential interarrival times, and 3-parameter Weibull models can flexibly describe time-to-event curves. You can also rely on the fact that predictions and inference coming from similar probability models tend to be quite similar; for instance, inference from the t-test tends to be quite similar to the z-test even in moderately small samples. Another thing to consider is that Tukey was quoted as having said, "Build your model as big as a house!" Within the limits of the data themselves, making oversimplified assumptions tends to be unnecessary when more flexible nested parametric models are available. For instance, instead of exponential time-to-event models, you could consider Weibull as a bigger class, or 3-parameter Weibull as an even bigger class of models. For counting processes, negative binomial models are basically two-parameter Poisson models. You can even consider mixtures or empirical likelihood as ways of describing densities with a minimal number of assumptions.
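A small sketch of the nesting point (hypothetical times, my own helper function): for a fixed Weibull shape k the scale MLE has a closed form, lam = mean(t^k)^(1/k), so one can profile the log-likelihood over k. Since shape k = 1 is exactly the exponential model, the larger Weibull family can never fit worse than the exponential sub-model.

```python
import math

def weibull_profile_loglik(times, k):
    # For fixed shape k, the MLE of the Weibull scale is
    # lam = mean(t^k) ** (1/k); plug it back into the log-likelihood.
    n = len(times)
    lam = (sum(t ** k for t in times) / n) ** (1.0 / k)
    return sum(
        math.log(k / lam) + (k - 1) * math.log(t / lam) - (t / lam) ** k
        for t in times
    )

times = [0.4, 1.1, 2.3, 0.7, 3.0, 1.8, 0.2, 2.6]
exp_ll = weibull_profile_loglik(times, 1.0)  # exponential = Weibull, shape 1
# Let the shape vary over a grid that includes 1.0:
best_ll = max(weibull_profile_loglik(times, 0.5 + 0.05 * i) for i in range(41))
print(exp_ll, best_ll)  # best_ll >= exp_ll: the bigger family never loses
```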
Overfitting due to a unique identifier among features
The predictive value of an ID field will vary considerably from dataset to dataset, so in some cases it's probably ok to leave in, and in others not. One case where it could have high predictive value (and should be taken out) is where you're trying to predict the age of something, and the ID is assigned according to age (e.g., new accounts or users' IDs are incremented). Other than cases like this, a continuous feature wouldn't provide much value. An overfit tree, which makes so many splits that each terminal node has only one observation, is just as likely to overfit on an ID column as any other column. Playing devil's advocate, one might even argue for leaving it in so as to have some sort of benchmark in your dataset for identifying variables that have no value.
Overfitting due to a unique identifier among features
"Certainly, any continuous column (with enough resolution) has the power to uniquely identify the example." - that's not true for a predictor that is treated as continuous. For instance, if you generate Y and X at random using uniform distribution, then all of the values of X will most likely be distinct. However, when you regress Y on X using simple linear regression, the fit (in terms of $R^2$) will not be perfect. That's because there is a restriction on relationship between Y and X in the form $Y = x'\beta + \epsilon$. There are only two parameters in the model. If you treat X as a categorical predictor with N levels where N is the sample size, then there is no such restriction: for each level of X, you are allowed estimate a distinct parameter that describes the average response at that level of X. That is, you end up with N observations and N parameters. For each such parameter, its estimate will be equal to the observed value of Y, so you will end up with a perfect fit. One implication is that if your subject ID is formatted as a number and the ML package will see it as a quantitative predictor, then you won't have a problem with it, apart from including a meaningless predictor in your model.
Overfitting due to a unique identifier among features
Another way to look at it is from the POV of a neural network. It is much easier to memorize which identifiers belong to which target than to learn meaningful features. Including the identifier also encourages co-adaptation, which overfits on the training set.
How can I compute the standard error of the Wald estimator?
Here is my answer to my question. I hope there is no mistake in the calculus. We have: $y_{1,i}$ a dichotomous random variable following a Bernoulli distribution with parameter $\mu_{y_1}$; $y_{0,i}$ a dichotomous random variable following a Bernoulli distribution with parameter $\mu_{y_0}$; $x_{1,i}$ a dichotomous random variable following a Bernoulli distribution with parameter $\mu_{x_1}$; $x_{0,i}$ a dichotomous random variable following a Bernoulli distribution with parameter $\mu_{x_0}$. The Wald estimator is defined as: $$ \beta_{Wald} = \frac{\mu_{y_1} - \mu_{y_0}}{\mu_{x_1} - \mu_{x_0}} $$ This can be estimated using the plug-in estimator: $$ \widehat{\beta_{Wald}} = \frac{\bar{y_1} - \bar{y_0}}{\bar{x_1} - \bar{x_0}} $$ I want to know the distribution of $\widehat{\beta_{Wald}}$. Since $\bar{y_1} - \bar{y_0}$ and $\bar{x_1} - \bar{x_0}$ converge to a normal distribution, I know that I can derive the distribution of $\widehat{\beta_{Wald}}$ using the Delta method (see Larry Wasserman, All of Statistics: A Concise Course in Statistical Inference, Springer Texts in Statistics, 2004, page 79). I define two new variables: $U = \bar{y_1} - \bar{y_0}$ and $V = \bar{x_1} - \bar{x_0}$. I know that: $U \xrightarrow{L}\, \mathcal{N}(\mu_U, \sigma^2_U)$ and $V \xrightarrow{L}\, \mathcal{N}(\mu_V, \sigma^2_V)$. I define the function $g(U,V) = U/V$. According to the Delta method, I know that: $$ g(U,V) \xrightarrow{L}\, \mathcal{N}\left(g(\mu_U, \mu_V), Dg(\mu_U, \mu_V)^T\Sigma Dg(\mu_U, \mu_V)\right) $$ with $Dg(\mu_U, \mu_V)$ the Jacobian matrix of the function $g$ and $\Sigma$ the variance-covariance matrix of the vector $(U,V)$.
So I compute the Jacobian: $$Dg \left( \begin{array}{c} \mu_U \\ \mu_V \end{array} \right) = \left(\begin{array}{c} \frac{1}{\mu_V} \\ \frac{-\mu_U}{\mu_V^2} \end{array} \right) $$ and I have the variance-covariance matrix: $$ \Sigma = \left( \begin{array}{cc} \sigma^2_U & \sigma_{U,V} \\ \sigma_{U,V} & \sigma^2_V \end{array} \right) $$ So the variance of $g(U,V)$ is: $$ Dg(\mu_U, \mu_V)^T \Sigma Dg(\mu_U, \mu_V) = \frac{\sigma^2_U}{\mu_V^2} - 2 \frac{\mu_U}{\mu_V^3} \sigma_{U,V} + \frac{\mu_U^2}{\mu_V^4} \sigma^2_V $$ In this case, since the $y_i$ follow a Bernoulli distribution, their variance is just $\mu_{y_i} (1-\mu_{y_i})$ and can be estimated using the plug-in estimator as $\bar{y_i} (1-\bar{y_i})$. Therefore I can estimate the following quantities: $$ \begin{eqnarray} \sigma^2_U & = & V(\bar{y_1} - \bar{y_0})\\ & = & V(\bar{y_1}) + V(\bar{y_0})\\ & = & \frac{1}{N_1} \bar{y_1} (1 - \bar{y_1}) + \frac{1}{N_0} \bar{y_0} (1 - \bar{y_0}) \end{eqnarray} $$ $$ \begin{eqnarray} \sigma^2_V & = & V(\bar{x_1} - \bar{x_0})\\ & = & V(\bar{x_1}) + V(\bar{x_0})\\ & = & \frac{1}{N_1} \bar{x_1} (1 - \bar{x_1}) + \frac{1}{N_0} \bar{x_0} (1 - \bar{x_0}) \end{eqnarray} $$ $$ \begin{eqnarray} \sigma_{U,V} & = & cov(\bar{y_1} - \bar{y_0}, \bar{x_1} - \bar{x_0})\\ & = & \beta_1 V(\bar{x_1}) + \beta_1 V(\bar{x_0})\\ & = & \beta_1 \left( \frac{1}{N_1} \bar{x_1} (1 - \bar{x_1}) + \frac{1}{N_0} \bar{x_0} (1-\bar{x_0})\right) \end{eqnarray} $$ So I can have sample estimates of all quantities in the expression $\frac{\sigma^2_U}{\mu_V^2} - 2 \frac{\mu_U}{\mu_V^3} \sigma_{U,V} + \frac{\mu_U^2}{\mu_V^4} \sigma^2_V$. Therefore I can get the variance of my Wald estimator and compute my standard error!
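The resulting formula is easy to turn into code. Here is a small Python sketch (function name and example numbers are mine) that plugs the sample quantities into the delta-method variance expression above, using the plug-in Wald estimate for $\beta_1$ in the covariance term:

```python
import math

def wald_se(y1bar, y0bar, x1bar, x0bar, n1, n0):
    """Delta-method standard error of the Wald (plug-in) estimator
    (y1bar - y0bar) / (x1bar - x0bar), all four variables Bernoulli."""
    mu_u, mu_v = y1bar - y0bar, x1bar - x0bar
    var_u = y1bar * (1 - y1bar) / n1 + y0bar * (1 - y0bar) / n0
    var_v = x1bar * (1 - x1bar) / n1 + x0bar * (1 - x0bar) / n0
    beta = mu_u / mu_v                 # the Wald estimate itself
    cov_uv = beta * var_v              # cov(U, V) = beta_1 * Var(V)
    var_beta = (var_u / mu_v ** 2
                - 2 * mu_u / mu_v ** 3 * cov_uv
                + mu_u ** 2 / mu_v ** 4 * var_v)
    return beta, math.sqrt(var_beta)

beta, se = wald_se(0.6, 0.4, 0.8, 0.3, 500, 500)
print(beta, se)  # 0.4 and roughly 0.058
```

As a sanity check, with a perfect first stage (x1bar = 1, x0bar = 0) the denominator is constant, so the standard error collapses to that of the simple difference in means of y.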
How can I compute the standard error of the Wald estimator?
Consider pages 287-290 in the original Wald (1940) paper. It walks you through the derivation of the variance.
How can I compute the standard error of the Wald estimator?
For future readers: the $\beta_1$ in @MichaelChirico's post is $\beta_\mathrm{wald}$ (the average treatment effect, in usual use), and the covariance formula follows from $E[Y|X] = \beta_0 + \beta_1 X$ without loss of generality (since X is binary). (Apologies for the extra answer; I have insufficient reputation to comment)
Probability of obtaining same sequence in $10$ tosses of a coin
This question has two reasonable interpretations. Neither requires any calculation at all, but only careful reasoning about independence and disjoint outcomes. Conditional on the first ten flips, what is the chance the next ten flips match them in sequence? If we were to re-order the sequences of both flips in the same way, the question would be the same and therefore would have to have the same answer. Consequently the answer does not depend on the specific sequence of results in the first ten flips, but only on how many heads appeared (say $x$ of them). We now only have to compute the chance for one specific such sequence, such as first flipping $x$ heads in a row and then flipping $10-x$ tails in a row. The independence assumption immediately gives the answer, which I leave to the reader. What is the unconditional chance? That is, what is the chance before any coin is flipped that the second sequence of ten results will be identical to the first sequence? The independence of the flips allows you to consider them in any order. Consider the first and eleventh flips together, which constitute two independent flips. The chance that they match is the chance that both are heads or both are tails, which (since these two events are disjoint) must be the sum of the two probabilities $$\Pr(\text{match})=\Pr(HH) + \Pr(TT)=p^2 + (1-p)^2.$$ Similarly pair flip $i$ with flip $10+i$ for all $i=1,2,\ldots, 10$. The two sequences match if and only if all $10$ pairs are matches. Since the pairs are disjoint, their outcomes remain independent, producing an immediate answer, which again I leave to the reader. An interesting way to check the answers is to note that the second answer must equal the first answer, summed over all possible outcomes of the first ten flips multiplied by their chances. 
Equating the two, generalizing from $10$ to $n$, and applying the Binomial distribution for the value of $x$ in the first $n$ flips gives the curious but easily proven identity $$\sum_{x=0}^n\left[p^x(1-p)^{n-x}\right]\;\binom{n}{x}p^x(1-p)^{n-x} = \left(p^2 + (1-p)^2\right)^n.$$ A more pedestrian, but very effective, check is to compare the answers with a simulation. Here is R code to estimate the unconditional chance for specified $n$ (such as $10$) and chance of heads $p$. In each iteration, it counts the number of differences between the first $n$ and second $n$ sets of flips. The estimated probability is the proportion of simulations where those counts were zero. The code outputs the estimate and its standard error. If the estimate lies within a couple standard errors of your calculated result, you're probably correct. When I ran it for $p=1/3$ and $n=10$ the output was

    Estimate Std.err
     0.00288 0.00005

It took one second to run a million iterations.

    n <- 10
    p <- 1/3
    n.sim <- 1e6
    x <- colSums(matrix((runif(n.sim*n) < p) != (runif(n.sim*n) < p), nrow=n)) == 0
    (round(c(Estimate=mean(x), Std.err=sd(x)/sqrt(n.sim)), 5))
Probability of obtaining same sequence in $10$ tosses of a coin
This question has two reasonable interpretations. Neither requires any calculation at all, but only careful reasoning about independence and disjoint outcomes. Conditional on the first ten flips, wh
Probability of obtaining same sequence in $10$ tosses of a coin

This question has two reasonable interpretations. Neither requires any calculation at all, but only careful reasoning about independence and disjoint outcomes.

Conditional on the first ten flips, what is the chance the next ten flips match them in sequence? If we were to re-order the sequences of both flips in the same way, the question would be the same and therefore would have to have the same answer. Consequently the answer does not depend on the specific sequence of results in the first ten flips, but only on how many heads appeared (say $x$ of them). We now only have to compute the chance for one specific such sequence, such as first flipping $x$ heads in a row and then flipping $10-x$ tails in a row. The independence assumption immediately gives the answer, which I leave to the reader.

What is the unconditional chance? That is, what is the chance before any coin is flipped that the second sequence of ten results will be identical to the first sequence? The independence of the flips allows you to consider them in any order. Consider the first and eleventh flips together, which constitute two independent flips. The chance that they match is the chance that both are heads or both are tails, which (since these two events are disjoint) must be the sum of the two probabilities $$\Pr(\text{match})=\Pr(HH) + \Pr(TT)=p^2 + (1-p)^2.$$ Similarly pair flip $i$ with flip $10+i$ for all $i=1,2,\ldots, 10$. The two sequences match if and only if all $10$ pairs are matches. Since the pairs are disjoint, their outcomes remain independent, producing an immediate answer, which again I leave to the reader.

An interesting way to check the answers is to note that the second answer must equal the first answer, summed over all possible outcomes of the first ten flips multiplied by their chances. Equating the two, generalizing from $10$ to $n$, and applying the Binomial distribution for the value of $x$ in the first $n$ flips gives the curious but easily proven identity $$\sum_{x=0}^n\left[p^x(1-p)^{n-x}\right]\;\binom{n}{x}p^x(1-p)^{n-x} = \left(p^2 + (1-p)^2\right)^n.$$

A more pedestrian, but very effective, check is to compare the answers with a simulation. Here is R code to estimate the unconditional chance for specified $n$ (such as $10$) and chance of heads $p$. In each iteration, it counts the number of differences between the first $n$ and second $n$ sets of flips. The estimated probability is the proportion of simulations where those counts were zero. The code outputs the estimate and its standard error. If the estimate lies within a couple standard errors of your calculated result, you're probably correct. When I ran it for $p=1/3$ and $n=10$ the output was

    Estimate  Std.err
     0.00288  0.00005

It took one second to run a million iterations.

    n <- 10
    p <- 1/3
    n.sim <- 1e6
    x <- colSums(matrix((runif(n.sim*n) < p) != (runif(n.sim*n) < p), nrow=n)) == 0
    (round(c(Estimate=mean(x), Std.err=sd(x)/sqrt(n.sim)), 5))
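As a cross-check in another language, the identity at the end of the answer can also be verified numerically. This short Python sketch (my addition, not part of the original answer) compares both sides for a few values of $n$ and $p$:

```python
from math import comb

def lhs(n, p):
    # sum over the number of heads x in the first n flips:
    # (number of sequences with x heads) * (chance of one such sequence)^2
    return sum(comb(n, x) * (p**x * (1 - p)**(n - x))**2 for x in range(n + 1))

def rhs(n, p):
    # chance that all n independent (flip i, flip n+i) pairs match
    return (p**2 + (1 - p)**2)**n

for n, p in [(10, 1/3), (10, 0.5), (7, 0.2)]:
    assert abs(lhs(n, p) - rhs(n, p)) < 1e-12
```

For $n=10$ and $p=1/3$ both sides equal $(5/9)^{10}\approx 0.0028$, consistent with the simulation output above.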
45,057
Probability of obtaining same sequence in $10$ tosses of a coin
You need to consider whether, given that a coin gave a "Head" on an earlier toss, the occurrence of a "Tail" depends on that previous event or not. The sequence should only matter if the path taken to the last coin toss impacts the last coin toss. This would be true if we were sampling without replacement, but outcomes of coin tosses are independent of each other.

Taking an example of 3 coin tosses, let's say the first instance was {H,T,H}. You can build a tree of possible outcomes:

                        First Toss
                       /          \
                   Head            Tail
                     |               |
                 2nd Toss        2nd Toss
                 /      \        /      \
              Head     Tail   Head     Tail
                |        |      |        |
            3rd Toss 3rd Toss 3rd Toss 3rd Toss
             /   \    /   \    /   \    /   \
          Head Tail *Head* Tail Head Tail Head Tail

The node marked with asterisks (*) denotes achieving the exact same sequence as the first time, i.e. {H,T,H}. Here, the total probability of the sequence, which you have calculated as p.(1-p).p, is the same irrespective of the sequence. This can be simulated in R as follows:

    ranvec <- runif(3 * 1000 * 1000) > 0.5
    ranDataFr <- as.data.frame(matrix(data=ranvec, nrow=1000000, ncol=3))
    names(ranDataFr) <- c("Toss1", "Toss2", "Toss3")
    prob_of_HTH <- length(ranDataFr[ranDataFr$Toss1==TRUE & ranDataFr$Toss2==FALSE & ranDataFr$Toss3==TRUE, 1]) / 1000000
    prob_of_HTH
    [1] 0.125456

Which, after allowing for simulation (sampling) error, is approximately the same as the analytically derived value: 0.5 x (1 - 0.5) x 0.5 = 0.125. You can expand this for the 10-toss sequence in a similar manner.
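The order-doesn't-matter point can also be checked by brute-force enumeration. The following Python sketch (my addition, not from the answer; it uses a biased coin, $p=0.3$, to make the point visible) lists all $2^3$ outcomes of three tosses and confirms that a sequence's probability depends only on its number of heads:

```python
from itertools import product
from collections import defaultdict

p = 0.3  # a biased coin makes the point clearer than p = 0.5
by_heads = defaultdict(set)
for seq in product("HT", repeat=3):
    pr = 1.0
    for s in seq:
        pr *= p if s == "H" else (1 - p)
    by_heads[seq.count("H")].add(round(pr, 12))

# every sequence with the same head count has the same probability,
# so {H,T,H}, {H,H,T} and {T,H,H} are all equally likely
assert all(len(probs) == 1 for probs in by_heads.values())
```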
45,058
Probability of obtaining same sequence in $10$ tosses of a coin
Suppose each coin toss is independent and that the first sequence of (ten) flips is $X_1, X_2, \dots, X_{10}$, where $\forall i \in \{1,\dots,10\}: X_i \in \{H,T\}$, meaning each flip is either $H$ (heads) or $T$ (tails). Let the observed value of the $i^{th}$ flip be denoted $F_i$, so that $\mathbb{P}(X_{i+10} = F_i)$ is the probability that the $(i+10)^{th}$ flip is equal to the observed value of flip $i$. What we want to solve for is then the probability of repeating the same sequence in the next ten flips (flips 11 to 20): $$\mathbb{P}(X_{11}=F_1,\dots,X_{20}=F_{10} \mid X_1=F_1,\dots,X_{10}=F_{10})$$ Since each flip is independent (or, as others like to say, the coin is "memoryless"), it doesn't matter that we condition on the past. Rather, we just need to calculate $$\mathbb{P}(X_{11}=F_1,\dots,X_{20}=F_{10})$$ Again, by independence, each flip in this sequence does not depend on the flips before or after it ("memoryless"), reducing our calculation to the product $$\prod_{i=1}^{10} \mathbb{P}(X_{i+10} = F_i) = \prod_{i=1}^{10} \left[p\cdot I(F_i = H) + (1-p) \cdot I(F_i = T)\right]$$ where $I(\cdot)$ is the standard Boolean indicator function. You'll find that this expression evaluates to $p^n (1-p)^{10-n}$, where $n$ is the number of heads in the first ten flips.
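A quick numerical sanity check of the final claim: for some hypothetical observed sequence, the indicator product should equal $p^n(1-p)^{10-n}$. A short Python sketch (my addition; the specific sequence is arbitrary):

```python
p = 1/3
first = list("HTHHTTHTHH")  # an arbitrary hypothetical observed sequence
n_heads = first.count("H")

# the product of per-flip match probabilities from the answer
match_prob = 1.0
for f in first:
    match_prob *= p * (f == "H") + (1 - p) * (f == "T")

# agrees with the closed form p^n (1-p)^(10-n)
assert abs(match_prob - p**n_heads * (1 - p)**(10 - n_heads)) < 1e-15
```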
45,059
Neural network working well on datasets near the training set, but poorly on farther datasets. Why?
You must have some autocorrelation in your data. In most cases, if one ignores correlation structure in the data (pseudolikelihood), the effect is that the estimated error in the data is too small. Suppose you considered the weather on two consecutive days: they are far more likely to be similar than the weather on two randomly selected days in the year. Basically, you have done the test/training selection incorrectly. You must select at random from the entire sample and not contiguous rows. This is why simple random sampling is unbiased but convenience sampling is not. Sampling contiguous rows of data, which are ordered in some sense, is effectively convenience sampling. The graphic that you have used should be a scrambling of different colors for each of the training/test/validation sets.
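The weather point is easy to demonstrate. This Python sketch (a toy illustration, not from the answer) simulates a strongly autocorrelated AR(1) series and shows that consecutive observations are far more similar than far-apart ones, which is exactly why contiguous train/test blocks give misleading error estimates:

```python
import math
import random

random.seed(0)
phi, n = 0.95, 20000  # strongly autocorrelated AR(1) process
x = [0.0]
for _ in range(n - 1):
    x.append(phi * x[-1] + random.gauss(0, 1))

def corr(a, b):
    # plain Pearson correlation of two equal-length lists
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    cov = sum((u - ma) * (v - mb) for u, v in zip(a, b))
    sa = math.sqrt(sum((u - ma) ** 2 for u in a))
    sb = math.sqrt(sum((v - mb) ** 2 for v in b))
    return cov / (sa * sb)

lag1 = corr(x[:-1], x[1:])        # "consecutive days"
lag200 = corr(x[:-200], x[200:])  # "far-apart days"

assert lag1 > 0.9        # neighbours are highly similar...
assert abs(lag200) < 0.3  # ...distant observations much less so
```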
45,060
Neural network working well on datasets near the training set, but poorly on farther datasets. Why?
Hypothesis 1: You have applied cross-validation incorrectly. The information encoded in position is somehow related to the outcome. To ameliorate this, you could try not selecting your sets to be adjacent but instead a random partition. That might be enough to "average out" the effect of position.

Hypothesis 2: By ignoring the data in position, you're discarding information. But even better would be to leverage the information encoded in position as a feature of the model in some way. I don't know what form that would take here because I don't understand what problem you're trying to solve, but if you're predicting weather on the basis of day of the year, you would want to incorporate the knowledge that yesterday's and tomorrow's temperatures are good predictors of today's. In the weather analogy, your model is getting good marks for predicting tomorrow as similar to today, but failing at predicting three months ahead.

Hypothesis 3: You actually want nested cross-validation. You've only one hold-out set, which is an interval of position. Instead, you want an additional $k$ randomly-selected partitions of position as hold-outs. This is the natural extension of hypothesis 1 (folds should be composed as random samples of observations, not convenient samples) to the notion of why cross-validation is desirable (use each observation to score the model, not just some observations).
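Hypothesis 1's fix (random rather than adjacent partitions) is a one-liner in practice. A minimal Python sketch of building $k$ randomly-composed folds (illustrative code, not tied to the OP's setup):

```python
import random

def random_folds(n_obs, k, seed=0):
    """Partition indices 0..n_obs-1 into k folds drawn at random,
    instead of k contiguous blocks of positions."""
    rng = random.Random(seed)
    idx = list(range(n_obs))
    rng.shuffle(idx)
    return [sorted(idx[i::k]) for i in range(k)]

folds = random_folds(100, 5)
assert sorted(i for fold in folds for i in fold) == list(range(100))  # exact partition
assert all(len(fold) == 20 for fold in folds)
# a contiguous split would instead give [0..19], [20..39], ...: the problematic case
```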
45,061
Neural network working well on datasets near the training set, but poorly on farther datasets. Why?
I think you have an issue with the way the position is encoded, as above, but my take on it is a little bit different, in that I think you have it encoded as an absolute distance to a reference point. If so, your NN will work well only within the boundaries of the farthest point. To obviate this problem, one solution would be to make the position relative, say to the center of the test set or something like that. Let me know if this helps with the issue. P.S. If you think this was irrelevant to your question I will be glad to delete it .. was just hoping it would be helpful in solving your problem
45,062
Convexity of linear regression
With 2 parameters and a single data point, the problem is not strictly convex because the matrix of observations and predictors is rank deficient. Indeed, as you observe, there is a line of many "equally good" solutions, and this is because for any choice of $x$ there is a corresponding $y$ which achieves the minimum: how many points satisfy $x+y=2$? Add more (linearly independent) observations than predictors and the problem is (strictly) convex.
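The rank deficiency is visible directly in the Hessian: for least squares it is $2X^{\top}X$, and strict convexity requires $X^{\top}X$ to be positive definite, i.e. $X$ of full column rank. A small Python check (my sketch) for the one-observation example from the question:

```python
def xtx(X):
    # X^T X for a list-of-rows matrix
    cols = len(X[0])
    return [[sum(row[i] * row[j] for row in X) for j in range(cols)]
            for i in range(cols)]

def det2(M):
    # determinant of a 2x2 matrix
    return M[0][0] * M[1][1] - M[0][1] * M[1][0]

one_obs = [[1, 1]]          # single observation: the row of (x + y - 2)^2
two_obs = [[1, 1], [1, 2]]  # add a linearly independent observation

assert det2(xtx(one_obs)) == 0  # singular Hessian: a line of minimisers
assert det2(xtx(two_obs)) > 0   # positive definite: unique minimiser
```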
45,063
Convexity of linear regression
$$(x+y-2)^2=0$$ $$x+y=2$$ $$y=2-x$$ You can pick any $x$, and get a corresponding $y$, i.e. there's no unique solution. With two unknowns and one observation, there's not going to be a unique solution
45,064
Why use mean of posterior distribution instead of probability?
A railroad numbers its locomotives in order 1..N. One day you see a locomotive with the number 60. Estimate how many locomotives the railroad has.

The example concerns the so-called German tank problem. From what I see, Allen B. Downey does not suggest that taking the mean of the posterior distribution enables us to calculate a posterior probability. The problem is about guessing the number of locomotives given only the information that there exists a locomotive numbered 60. Bayesian analysis of this problem leads to using this data and a uniform prior to obtain a posterior distribution. The "best guess" about the number of locomotives is the mean of this distribution. In this case we are not interested in probabilities, or the distribution of the parameter of interest, but in a point estimate for it. The mean of the posterior distribution is one such point estimate we can use. As mentioned in the comments and in @peuhp's answer, in this case the mean minimizes the L2 norm (squared difference), but we could choose different estimators as well, e.g. the median, which minimizes the L1 norm (absolute difference), the mode, which minimizes the L0 norm, etc. All this depends on the loss function that you want to minimize, i.e. the criterion that you use when deciding on what the "best guess" is. You could also be interested in reading about maximum a posteriori estimation.
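For concreteness, the locomotive example can be reproduced in a few lines. This Python sketch (my own, assuming a uniform prior on $1..1000$ as in Downey's book) computes the posterior and its mean:

```python
# Seeing locomotive number 60: likelihood is 1/N for N >= 60, else 0.
upper = 1000
post = {N: 1.0 / N for N in range(60, upper + 1)}  # uniform prior * likelihood
total = sum(post.values())
post = {N: w / total for N, w in post.items()}     # normalise

# the "best guess" point estimate under squared-error loss
post_mean = sum(N * w for N, w in post.items())
assert 330 < post_mean < 340
```

The mean (about 333 here) clearly depends on the arbitrary upper bound of the prior, which underlines that it is a point estimate chosen under a loss function, not a probability statement.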
45,065
Why use mean of posterior distribution instead of probability?
Indeed, the mean of the posterior says nothing that the posterior density itself does not contain. However, as it minimises the loss function $$ \operatorname{mean}(p(\theta|x)) = \arg\min_{\theta^{*}} \int_{\theta} ||\theta^{*}-\theta||^2 \cdot p(\theta|x) \, d\theta $$ it provides a number (which can be interpreted more easily than a full distribution) that can be taken as a satisfying best-guess estimate of the quantity of interest. Moreover, sometimes the density itself is hardly available in closed form or estimable by an algorithm, while the mean can be derived/estimated more easily.
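The loss-minimisation property is easy to verify numerically on a toy discrete posterior. The grid search below (an illustrative Python sketch, my addition) recovers the mean under squared loss and the median under absolute loss:

```python
# a small discrete "posterior" over theta
support = [1, 2, 3, 4, 10]
weights = [0.1, 0.3, 0.3, 0.2, 0.1]

def expected_loss(guess, loss):
    return sum(w * loss(guess - theta) for theta, w in zip(support, weights))

grid = [g / 100 for g in range(0, 1201)]
best_l2 = min(grid, key=lambda g: expected_loss(g, lambda d: d * d))
best_l1 = min(grid, key=lambda g: expected_loss(g, abs))

posterior_mean = sum(t * w for t, w in zip(support, weights))  # 3.4
assert abs(best_l2 - posterior_mean) < 0.01  # squared loss -> posterior mean
assert abs(best_l1 - 3.0) < 0.01             # absolute loss -> posterior median
```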
45,066
Zero-inflated Poisson regression Vuong test: Raw, AIC- or BIC-corrected results
I am convinced that it is incorrect to use the Vuong test -- in any of its forms -- as a test for zero-inflation. I have had a paper "The misuse of the Vuong test for non-nested models to test for zero-inflation" published that explains why. See http://cybermetrics.wlv.ac.uk/paperdata/misusevuong.pdf. I have also presented the paper at major statistics conferences and no one disagreed with me. If you are still working on this, or zero-inflation in general, get in touch if you wish.
45,067
Zero-inflated Poisson regression Vuong test: Raw, AIC- or BIC-corrected results
Great question, with a very un-great answer: it depends. It depends on whether or not there actually is zero-inflation in the DGP. To say it another way, the Vuong test is conditional - not a diagnosis - so there will be much to justify when it comes to your results. The best explanation I have found is in Desmarais and Harden (2013).

First, what these are:
1) I assume you know what the Vuong test statistic (raw) is, and how it is estimated.
2) The AIC and BIC statistics are corrections on the log-likelihood estimation, because the standard (raw) statistic is biased.

So which is best? They say this (very basically):
1) If the DGP is not zero-inflated, then the BIC correction will perform the best, followed by the AIC correction, and then by the raw statistic.
2) If the DGP is zero-inflated, then each of the three tests performs similarly well.

Additional:
1) According to Desmarais and Harden, this applies just to Poisson models, and these corrections perform differently when looking at NB models.
2) Part of the reason why BIC corrections fail or succeed is the number of observations and (more importantly) the number of (and accuracy of) the covariates included in the inflation equation.

In other words, you'll likely need to make some assumptions about the DGP of your sample, and then choose an appropriate method based on varying the model specifications. Simple, right?
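For what it's worth, the three statistics differ only in a penalty applied to the log-likelihood ratio before it is standardised. My reading of the usual implementation (e.g. in R's pscl package) is sketched below in Python; treat the exact correction terms as an assumption to verify against your software's documentation:

```python
import math

def vuong_stats(ll1, ll2, k1, k2):
    """Raw, AIC- and BIC-corrected Vuong z statistics from pointwise
    log-likelihoods of two non-nested models with k1 and k2 parameters.
    Correction terms follow my reading of pscl's vuong(); a sketch only."""
    n = len(ll1)
    m = [a - b for a, b in zip(ll1, ll2)]
    lr = sum(m)  # log-likelihood ratio
    mbar = lr / n
    s = math.sqrt(sum((mi - mbar) ** 2 for mi in m) / n)
    denom = math.sqrt(n) * s
    z_raw = lr / denom
    z_aic = (lr - (k1 - k2)) / denom                    # AIC-style penalty
    z_bic = (lr - (k1 - k2) / 2 * math.log(n)) / denom  # BIC-style penalty
    return z_raw, z_aic, z_bic

# toy pointwise log-likelihoods; model 1 has two extra parameters
ll1 = [-(1.0 + (i % 4) * 0.1) for i in range(100)]
ll2 = [-(1.1 + (i % 3) * 0.1) for i in range(100)]
z_raw, z_aic, z_bic = vuong_stats(ll1, ll2, k1=5, k2=3)
assert z_raw > z_aic > z_bic  # corrections penalise the larger model
```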
45,068
Zero-inflated Poisson regression Vuong test: Raw, AIC- or BIC-corrected results
This is a very interesting question I'm also searching for. Unfortunately, I couldn't find an answer yet. That's why I cannot help you with explaining the difference between Raw, AIC and BIC. However, I can help you with your initial question of which model you should choose. The AIC- and BIC-corrected tests are based on checking model 1 > model 2, while the raw statistic tests model 2 > model 1. So in your case, the Poisson regression should fit perfectly.
45,069
How do I understand ANCOVA in basic layman's terms?
To answer your question, I would like to invite you to think of a broader picture and then take you back to your original question. First, I would like to introduce a comparison between ANOVA and linear regression with one categorical independent variable; second, a comparison between ANCOVA and linear regression with one categorical independent variable and one quantitative independent variable; then, the comparison between ANCOVA and ANOVA.
Let's say, for illustration's sake, that you have three groups of individuals (the categorical independent variable with three levels) and one piece of quantitative additional information about each individual (the quantitative independent variable), beyond their individual responses, of course. You can use ANOVA to evaluate whether the three groups have the same average response or not, given a set of assumptions with which we will not be concerned here, just for pedagogical reasons. You could take that same data and run a linear regression with 2 dummy variables: one indicating whether an individual belongs to the second group, and one indicating whether an individual belongs to the third group. Individuals belonging to neither the second nor the third group would, by elimination, belong to the first group, which could be thought of as a "reference group". You would use that regression to assess by how much the second and third groups differ in average response from the first group (given a set of assumptions, etc., etc., etc.). One can show that both formulations are mathematically equivalent, so the difference lies in the semantic field.
Statistical technicalities apart, what would be the difference between those two analyses in terms of interpretation? In the first, the main concern is to detect whether all groups have the same mean or not. In the second, the main concern is to detect by how much the second and the third group differ from the first, sweeping under the rug the discussion about the difference between the second and the third groups. In terms of interpretation alone: potato/potahto? Perhaps.
Now look at ANCOVA. Beyond the average response of the group, you also consider that quantitative additional information, which might or might not influence the expected response of the individuals. ANCOVA, like ANOVA, is focused on detecting whether the groups have the same average response or not, considering the additional influence of that quantitative information. A linear regression with the two dummy variables above plus a quantitative regressor would also be focused on evaluating the effect of that quantitative regressor on the expected individual response. Likewise, one can show that there is mathematical equivalence between those two formulations.
Now, what could be the difference in terms of interpretation? In the regression, you are actually interested in evaluating the effect of the quantitative regressor, while in ANCOVA all you want is to discount the effect of the quantitative regressor before comparing the average responses among the groups, because you couldn't care less about that quantitative regressor were it not associated with the individual responses. So, your interest is to reduce the unexplained variability in the comparison by attributing some of that variability to a source (that dang quantitative regressor). In regression, if the effect of the regressor is not statistically significant, you exclude it from your model and fit it again with only your dummies. In ANCOVA, you don't even test the quantitative regressor for significance, because, significant or not, it has already played its role of controlling for variability in the individual responses that is external to the group influences, attributing it to the systematic effect of that quantitative regressor.
Finally, heading to your question: ANCOVA controls for systematic variability that you can attribute to specific sources, reducing the unexplained, unsystematic variability that is captured in the residual sum of squares, while ANOVA doesn't. Okey-dokey? =)
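The claimed equivalence between ANOVA and regression with dummy variables can be checked numerically. A minimal sketch (Python/NumPy here rather than a stats package; the data and variable names are made up for illustration): with an intercept plus dummies for groups 2 and 3, the OLS coefficients reproduce the group means exactly.

```python
import numpy as np

# Three groups with different true means; the dummy-coded regression should
# recover group 1's mean as the intercept and the other two as offsets.
rng = np.random.default_rng(0)
n = 50
g1 = rng.normal(10.0, 1.0, n)
g2 = rng.normal(12.0, 1.0, n)
g3 = rng.normal(15.0, 1.0, n)
y = np.concatenate([g1, g2, g3])

# Design matrix: intercept + "is in group 2" dummy + "is in group 3" dummy.
X = np.column_stack([
    np.ones(3 * n),
    np.r_[np.zeros(n), np.ones(n), np.zeros(n)],
    np.r_[np.zeros(n), np.zeros(n), np.ones(n)],
])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)

# beta[0] equals mean(g1); beta[1], beta[2] are the differences from group 1,
# exactly as the ANOVA group means would give.
print(beta[0], g1.mean())
print(beta[0] + beta[1], g2.mean())
```

This is the "reference group" coding described above: the intercept is the first group's mean, and each dummy coefficient is the gap between its group and that reference.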
45,070
How do I understand ANCOVA in basic layman's terms?
ANOVA looks at the influence of one or more grouping variables (factors) on some continuous dependent measure. ANCOVA includes at least one grouping variable, but also includes interval-or-ratio-scaled variables on the IV side that are assumed to relate to the DV in linear fashion as in a regression. ANOVA will let me assess whether extroversion scores (DV) are different for the combinations of sex (grouping variable) and country-of-birth (grouping variable). ANCOVA will let me assess whether extroversion scores (DV) are different for people of different sex (groups) and different ages (ratio-scaled variable) - though the interaction of these two variables is often left unexamined.
45,071
Data augmentation step in Krizhevsky et al. paper
They say they increased the size of the training set by a factor of 2048. Does this mean they trained on a total of 2048 × 1.2 million images?
Yes — in the paper:
The first form of data augmentation consists of generating image translations and horizontal reflections. We do this by extracting random 224x224 patches (and their horizontal reflections) from the 256x256 images and training our network on these extracted patches
They generate those extra patches 'on the fly' from the original images, as stated here:
In our implementation, the transformed images are generated in Python code on the CPU while the GPU is training on the previous batch of images. So these data augmentation schemes are, in effect, computationally free.
What do they mean they extracted five 224 × 224 patches (corner, center and horizontal)? And why does it result in ten patches in total?
The original images have size 256x256, so they get a patch by cropping the original picture at the upper left corner with a size of 224x224. The same is done for the upper right, lower left, lower right and center, which makes 5 patches. For each of those patches they also mirror the picture horizontally, so they get 5 more patches. That's 10 in total, and then they take the average prediction.
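The test-time "ten-crop" scheme described above can be sketched in a few lines of NumPy (illustrative only; the function name and use of raw arrays are mine, not from the paper): four corner crops plus a center crop, then the horizontal mirror of each.

```python
import numpy as np

def ten_crop(img, crop=224):
    """Five crops (four corners + center) of an image, plus the
    horizontal reflection of each: ten patches in total."""
    h, w = img.shape[:2]
    c = (h - crop) // 2
    offsets = [(0, 0), (0, w - crop), (h - crop, 0),
               (h - crop, w - crop), (c, c)]
    patches = [img[r:r + crop, col:col + crop] for r, col in offsets]
    patches += [p[:, ::-1] for p in patches]  # horizontal mirrors
    return patches

patches = ten_crop(np.zeros((256, 256, 3)))
print(len(patches), patches[0].shape)  # 10 (224, 224, 3)
```

At prediction time, the network's softmax outputs over these ten patches are averaged, as the answer notes.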
45,072
Data augmentation step in Krizhevsky et al. paper
I think they've trained only on 1.2M images. Here is why: Even if they could get 0.001s per forward and backward pass (with 1 Titan X and cuDNN), it would take this much time to train on 2048*1.28M images for 90 epochs with mini-batch SGD: 0.001*2048*1280000*90/60/60/24 = ~ 2730 days = ~ 7.5 years
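The back-of-envelope figure can be checked directly (same numbers as above, just spelled out):

```python
# Seconds per pass * augmented images per epoch * epochs, converted to days.
seconds = 0.001 * 2048 * 1_280_000 * 90
days = seconds / 60 / 60 / 24
years = days / 365
print(int(days), round(years, 1))  # 2730 7.5
```

So even under the (very optimistic) 1 ms-per-pass assumption, explicitly iterating over all 2048 variants of every image is infeasible, which supports the "augment on the fly, one random patch per visit" reading.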
45,073
Data augmentation step in Krizhevsky et al. paper
They are actually training on 1.2 million * 2048 training images.
We do this by extracting random 224 × 224 patches (and their horizontal reflections) from the 256×256 images
For each training image of size 256x256, extracting patches of size 224x224 gives (256-224)*(256-224) = 32*32 = 1024 possible patches from the image. For each such patch you also take a horizontal reflection, which doubles the count. In total, 2048 patches from a single image.
45,074
R Time series forecasting: Having issues selecting fourier pairs for ARIMA with regressors
You're hitting the wall because you're exhausting the limits of the first Fourier transform fourier(1:n,i,m1). As RandomDude correctly pointed out above, the number of transforms i should be no greater than half the period (m1). However, if, with your code, you run 2 cycles -- one for i, and another for j, where j would be the number of transforms for the second seasonality cycle fourier(1:n,j,m3) -- you would still have a lot of room for model improvement. This is what I've got from your data, even without dummies, only based on AR, MA, and data seasonality:
library(forecast)
y <- msts(ts, c(7, 365))  # multi-seasonal ts
fit <- auto.arima(y, seasonal = FALSE, xreg = fourier(y, K = c(3, 30)))
fit_f <- forecast(fit, xreg = fourierf(y, K = c(3, 30), 180), 180)
plot(fit_f)
I suspect the performance will improve even further when holidays are added.
45,075
R Time series forecasting: Having issues selecting fourier pairs for ARIMA with regressors
Is there a reason why you are not using the fourier() function in the forecast package? When you try to build a Fourier term of a seasonal time series object, your K must be no greater than period/2. Otherwise you get an error:
fourier(ts(test, frequency = 7), 4)  # 3 works, 4+ doesn't
Error in ...fourier(x, K, 1:length(x)) : K must be not be greater than period/2
Quote from ?fourier:
fourier(x, K)
When x is a ts object, the value of K should be an integer and specifies the number of sine and cosine terms to return. Thus, the matrix returned has 2*K columns.
I don't have a theoretical explanation, and I don't have enough reputation to write a comment under your post (an answer was the only option). Hope I could still help you somehow!
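To see concretely why K is capped at period/2, here is a small sketch (Python rather than R, and only loosely mirroring what forecast's fourier() produces): each k gives one sine/cosine pair, so K pairs yield 2*K columns, and for integer time indices any frequency above period/2 aliases back onto a lower one, so higher K would only duplicate columns.

```python
import numpy as np

def fourier_terms(n, period, K):
    """2*K columns of sin/cos seasonal regressors for t = 1..n."""
    t = np.arange(1, n + 1)
    cols = []
    for k in range(1, K + 1):
        cols.append(np.sin(2 * np.pi * k * t / period))
        cols.append(np.cos(2 * np.pi * k * t / period))
    return np.column_stack(cols)

X = fourier_terms(100, 7, 3)  # K = 3 is the most period 7 supports
print(X.shape)                # (100, 6)

# Aliasing: sampled at integer t, harmonic k = 4 of period 7 coincides
# with harmonic period - k = 3, so it carries no new information.
t = np.arange(10)
a = np.cos(2 * np.pi * 4 * t / 7)
b = np.cos(2 * np.pi * 3 * t / 7)
print(np.allclose(a, b))  # True
```

That aliasing is the theoretical reason behind the "K must be not be greater than period/2" error quoted above.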
45,076
R Time series forecasting: Having issues selecting fourier pairs for ARIMA with regressors
Optimization of Fourier pairs based on AICc values. This is for yearly and monthly seasonality on data without weekends. The ranges 1:10 and 1:20 should be changed accordingly for different seasonal periods, or increased for a broader search.
msts_test <- msts(test, seasonal.periods = c(21.66, 260))
my_aic_df <- matrix(ncol = 10, nrow = 20)
for (i in 1:10) {
  for (j in 1:20) {
    fn <- fourier(msts_test, K = c(i, j))
    FourierFit <- auto.arima(msts_test, seasonal = FALSE, xreg = fn)
    my_aic_df[j, i] <- FourierFit$aicc  # row = j, column = i
  }
}
which(my_aic_df == min(my_aic_df), arr.ind = TRUE)
45,077
Class weights in caret [closed]
I haven't gotten around to implementing it for all the models that can accept weights. Right now, it should work for rpart variants, glmnet, gamSpline, glmboost, gamboost, evtree, ctree, ctree2, chaid, cforest, blackboost, treebag, glm, glmStepAIC, and bayesglm. Note that ksvm function does not have a weight parameter, so those models won't be enabled.
45,078
PCA: Eigenvectors of opposite sign and not being able to compute eigenvectors with `solve` in R
1) The definition of eigenvector $Ax = \lambda x$ is ambidextrous. If $x$ is an eigenvector, so is $-x$, for then $$A(-x) = -Ax = -\lambda x = \lambda (-x)$$ So the definition of an eigenbasis is ambiguous of sign.
2) It's hard to know for sure, but I have a strong suspicion of what is happening here. Your equation $$ (A - \lambda)x = 0 $$ is technically incorrect. The correct equation is $$ (A - \lambda I)x = 0 $$ The first equation is often used as a shorthand for the second. In general, this is unambiguous, because there is no real mathematical way to subtract a scalar from a square matrix, but it is an abuse of notation. In R, though, you have broadcasting. So if you do
> M <- matrix(c(1, 1, 1, 1), nrow=2)
> M - .5
     [,1] [,2]
[1,]  0.5  0.5
[2,]  0.5  0.5
it's not really what you want. The proper way would be
> M - diag(.5, 2)
     [,1] [,2]
[1,]  0.5  1.0
[2,]  1.0  0.5
The reason you are getting zero solutions is that the matrix you are starting with, $A$, is invertible. More than likely (almost surely), the matrix you get by subtracting the same number from every entry will also be invertible. For invertible matrices, the only solution to $Ax = 0$ is the zero vector.
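The same pitfall exists in NumPy, where scalar subtraction also broadcasts over every entry. A small sketch (illustrative; not from the thread) showing both points: the sign ambiguity of eigenvectors, and why A - lam (broadcast) is invertible while A - lam*I is singular by construction.

```python
import numpy as np

# A small symmetric matrix, like a covariance matrix in PCA.
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])
vals, vecs = np.linalg.eigh(A)
lam, v = vals[1], vecs[:, 1]   # largest eigenvalue (3) and its eigenvector

# (1) Both v and -v satisfy the eigenvector equation.
assert np.allclose(A @ v, lam * v)
assert np.allclose(A @ (-v), lam * (-v))

# (2) Subtracting lam from *every entry* (broadcasting) gives an
# invertible matrix, so solving with a zero right-hand side returns
# only the zero vector...
x_wrong = np.linalg.solve(A - lam, np.zeros(2))

# ...whereas A - lam*I is singular, and the eigenvector lies in its
# null space, which is why solve() cannot be used to find it.
M = A - lam * np.eye(2)
print(np.linalg.matrix_rank(M))  # 1, not 2
print(M @ v)
```

To actually recover the eigenvector of a singular `A - lam*I`, one needs a null-space computation (e.g. via the SVD) rather than `solve`.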
45,079
Probability that one sum of squared standard normals is greater than a constant times another such sum
You can use the relation of the F-distribution to the chi-squared $$F_{m,n}=\frac{\chi_m^2/m}{\chi_n^2/n}$$ $P\{Y_3^2 + Y_4^2+ ... + Y_n^2 \geq \alpha ( Y_1^2 + Y_2^2)\}=P(\chi_{n-2}^2\ge\alpha\chi_2^2)=P(\frac{\chi_{n-2}^2}{\chi_2^2}\ge\alpha)$ Now you can adjust it to be in the form of an $F$. $$P\Bigg(F_{n-2,2}\ge\frac{2}{n-2}\alpha\Bigg)$$
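A quick Monte Carlo check of this reduction (Python, illustrative). The comparison value uses the fact that with two denominator degrees of freedom the F survival function has an elementary closed form, since the regularized incomplete beta function with second parameter 1 is just a power.

```python
import numpy as np

# Check: P(Y3^2+...+Yn^2 >= alpha*(Y1^2+Y2^2)) = P(F_{n-2,2} >= 2*alpha/(n-2))
n, alpha = 10, 1.5
rng = np.random.default_rng(1)
y = rng.standard_normal((200_000, n))
num = (y[:, 2:] ** 2).sum(axis=1)   # chi-squared with n-2 df
den = (y[:, :2] ** 2).sum(axis=1)   # chi-squared with 2 df
mc = np.mean(num >= alpha * den)

# Elementary form of the F survival function when d2 = 2:
# P(F_{m,2} >= f) = 1 - (m*f / (m*f + 2))**(m/2)
m, f = n - 2, 2 * alpha / (n - 2)
exact = 1 - (m * f / (m * f + 2)) ** (m / 2)
print(mc, exact)  # both close to 0.8704
```

With n = 10 and alpha = 1.5 the exact probability is 1 - 0.6^4 = 0.8704, and the simulation agrees to about three decimals.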
45,080
Probability that one sum of squared standard normals is greater than a constant times another such sum
In keeping with the self-study policy I will leave some hints rather than post a complete answer, but also try to explore a little about why this type of question "works". Probably I have to use the fact that the sum of squared standard gaussians is chi-distributed random variable Yes, but you need to make that "the sum of independent squared standard gaussians is a chi-distributed random variable" — the independence is an important part of this question. Fortunately you were told in the beginning that $Y_i$ were "i.i.d.", which means independent and identically distributed. but the $\alpha$ messes things up Actually it's not really the $\alpha$ which is causing you the problem. Consider the special case that $\alpha = 1$. Does that improve things? We still get a chi-squared variable on the left-hand side and another chi-squared variable on the right-hand side. But as before, we are unable to combine them into a single chi-squared variable, because they are on different sides of the equation. If you had moved the $Y_1^2$ and $Y_2^2$ across from the right to the left by subtraction, you would still have been unable to combine into a single chi-squared variable, because the coefficients on $Y_1^2$ and $Y_2^2$ would have been $-1$ while the coefficients on the the other $Y_i^2$ would have been $+1$. So the problem is no harder with or without the $\alpha$. You are bound to get two different chi-squared variables, and so your solution strategy should be to exploit this fact, rather than try to reduce it to a single chi-squared. Something else that I noticed is the symmetry i.e. this probability should be the same for any 2 random variables that I have on the right side of the inequality. Moreover, note that they are different $Y_i$s on the right-hand and left-hand sides, and therefore are independent of each other. As a result, the chi-squared variables you get on the left-hand and right-hand sides are also independent of each other. 
Let's write $X_1 \sim \chi^2_{\nu_1}$ and $X_2 \sim \chi^2_{\nu_2}$ with $X_1$ and $X_2$ independent; we are seeking $\Pr(X_1 \geq \alpha X_2)$. Can you think of a distributional fact that relates two independent chi-squared variables to each other? Perhaps look at the list of relationships in the chi-squared article in Wikipedia. A couple of candidates present themselves. $\frac{X_1}{X_1 + X_2} \sim \text{Beta}(\frac{\nu_1}{2}, \frac{\nu_2}{2})$ looks a little tricky to apply since we don't have the sum of two chi-squared variables at the moment. But we could e.g. add $\alpha X_1$ to both sides to get: $$(\alpha + 1) X_1 \geq \alpha (X_1 + X_2)$$ $\frac{X_1/\nu_1}{X_2/\nu_2} \sim F(\nu_1, \nu_2)$ could be applied more easily; it requires the ratio of two independent chi-squared variables, and since we have a chi-squared as a factor on each side, the desired fraction is only a short manipulation away. One neat thing about the Beta distribution method, though, is that (particularly with the nice numbers given in the question) you will obtain a probability that you can easily integrate out to give you a formula for the probability as a rational function of $\alpha$. The Beta distribution's density function will turn out to be "nice" because the two degrees of freedom on the right-hand chi-squared become one degree of freedom in the Beta distribution, and in the PDF we only raise the corresponding factor to a power one less than the degrees of freedom; this ease of integration will also apply to the normalizing factor, which is a Beta function. Such a solution feels, at least to me, more satisfying than a solution in terms of the $F$ distribution's CDF. It's not even that much harder to keep $n$ general, rather than substituting a specific value, to yield a formula in terms of $\alpha$ and $n$. Try it!
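In the spirit of hints rather than a full solution, here is a short simulation (Python, illustrative) checking only the quoted distributional fact, that $X_1/(X_1+X_2)$ for independent chi-squared variables follows a Beta distribution, without giving away the final formula.

```python
import numpy as np

# Empirical check: X1 ~ chi2(8), X2 ~ chi2(2) independent, so
# B = X1/(X1+X2) should follow Beta(4, 1).
nu1, nu2 = 8, 2
rng = np.random.default_rng(2)
x1 = rng.chisquare(nu1, 100_000)
x2 = rng.chisquare(nu2, 100_000)
b = x1 / (x1 + x2)

# Compare empirical moments with Beta(a, b) theory.
a_, b_ = nu1 / 2, nu2 / 2
mean_theory = a_ / (a_ + b_)                              # 0.8
var_theory = a_ * b_ / ((a_ + b_) ** 2 * (a_ + b_ + 1))   # 4/150
print(b.mean(), mean_theory)
print(b.var(), var_theory)
```

With $\nu_2 = 2$ the second Beta parameter is 1, which is exactly the "nice" case mentioned above: the PDF is a plain power of $x$, so integrating it to answer the original question is straightforward.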
Probability that one sum of squared standard normals is greater than a constant times another such s
In keeping with the self-study policy I will leave some hints rather than post a complete answer, but also try to explore a little about why this type of question "works". Probably I have to use the
Probability that one sum of squared standard normals is greater than a constant times another such sum

In keeping with the self-study policy I will leave some hints rather than post a complete answer, but also try to explore a little about why this type of question "works".

"Probably I have to use the fact that the sum of squared standard gaussians is a chi-squared random variable"

Yes, but you need to make that "the sum of independent squared standard gaussians is a chi-squared random variable"; the independence is an important part of this question. Fortunately you were told in the beginning that the $Y_i$ were "i.i.d.", which means independent and identically distributed.

"but the $\alpha$ messes things up"

Actually it's not really the $\alpha$ which is causing you the problem. Consider the special case $\alpha = 1$. Does that improve things? We still get a chi-squared variable on the left-hand side and another chi-squared variable on the right-hand side. But as before, we are unable to combine them into a single chi-squared variable, because they are on different sides of the inequality. If you had moved the $Y_1^2$ and $Y_2^2$ across from the right to the left by subtraction, you would still have been unable to combine into a single chi-squared variable, because the coefficients on $Y_1^2$ and $Y_2^2$ would have been $-1$ while the coefficients on the other $Y_i^2$ would have been $+1$. So the problem is no harder with the $\alpha$ than without it. You are bound to get two different chi-squared variables, and so your solution strategy should be to exploit this fact, rather than try to reduce everything to a single chi-squared.

"Something else that I noticed is the symmetry i.e. this probability should be the same for any 2 random variables that I have on the right side of the inequality."

Moreover, note that they are different $Y_i$s on the right-hand and left-hand sides, and therefore are independent of each other. As a result, the chi-squared variables you get on the left-hand and right-hand sides are also independent of each other. Let's write $X_1 \sim \chi^2_{\nu_1}$ and $X_2 \sim \chi^2_{\nu_2}$ with $X_1$ and $X_2$ independent; we are seeking $\Pr(X_1 \geq \alpha X_2)$. Can you think of a distributional fact that relates two independent chi-squared variables to each other? Perhaps look at the list of relationships in the chi-squared article on Wikipedia. A couple of candidates present themselves.

$\frac{X_1}{X_1 + X_2} \sim \text{Beta}(\frac{\nu_1}{2}, \frac{\nu_2}{2})$ looks a little tricky to apply since we don't have the sum of two chi-squared variables at the moment. But we could e.g. add $\alpha X_1$ to both sides to get: $$(\alpha + 1) X_1 \geq \alpha (X_1 + X_2)$$

$\frac{X_1/\nu_1}{X_2/\nu_2} \sim F(\nu_1, \nu_2)$ could be applied more easily; it requires the ratio of two independent chi-squared variables, and since we have a chi-squared as a factor on each side, the desired fraction is only a short manipulation away.

One neat thing about the Beta distribution method, though, is that (particularly with the nice numbers given in the question) you will obtain a probability that you can easily integrate out to give you a formula for the probability as a rational function of $\alpha$. The Beta distribution's density function will turn out to be "nice" because the two degrees of freedom on the right-hand chi-squared become one degree of freedom in the Beta distribution, and in the PDF we only raise the corresponding factor to a power one less than the degrees of freedom; this ease of integration will also apply to the normalizing factor, which is a Beta function. Such a solution feels, at least to me, more satisfying than a solution in terms of the $F$ distribution's CDF. It's not even that much harder to keep $n$ general, rather than substituting a specific value, to yield a formula in terms of $\alpha$ and $n$. Try it!
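If you want to sanity-check whatever closed form you derive without spoiling the exercise, here is a stdlib-only Monte Carlo sketch. The event form $\sum_{i=3}^n Y_i^2 \geq \alpha(Y_1^2+Y_2^2)$ and the values of $n$ and $\alpha$ below are assumptions for illustration, not taken from the original question.

```python
import random

def mc_prob(n, alpha, trials=100_000, seed=1):
    """Monte Carlo estimate of P(Y_3^2 + ... + Y_n^2 >= alpha*(Y_1^2 + Y_2^2))
    for i.i.d. standard normal Y_i (assumed form of the question's event)."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        y = [rng.gauss(0.0, 1.0) for _ in range(n)]
        left = sum(v * v for v in y[2:])        # chi-squared with n - 2 df
        right = alpha * (y[0] ** 2 + y[1] ** 2)  # alpha times chi-squared, 2 df
        hits += left >= right
    return hits / trials

# the probability should shrink as alpha grows
p1, p2 = mc_prob(6, 0.5), mc_prob(6, 2.0)
```

With $n = 4$ and $\alpha = 1$ both sides are independent chi-squared with 2 df, so by symmetry the probability is exactly 1/2; that makes a handy check of the simulator itself.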
45,081
Which formula is this?
Scheaffer et al.[1] call this the "estimated variance of $\bar{y}$", or $\widehat{V}(\bar{y})$; see equation 4.2, p. 83. (Well, they have $\left(1-\frac{n}{N}\right)\frac{s^2}{n}$, but it's trivial to show they're the same. They call $\left(1-\frac{n}{N}\right)$ the $\text{fpc}$, for finite population correction.)

They say that this is an unbiased estimate of $V(\bar{y})=\frac{\sigma^2}{n}\left(\frac{N-n}{N-1}\right)$ (see p. 81, same reference).

(I was able to locate and view both of those pages in Google Books by use of appropriate searches, but what you can see may vary from country to country.) Similar treatment can be found in a number of other books.

[1] Scheaffer, R., William Mendenhall III, R. Ott, Kenneth Gerow, Elementary Survey Sampling, 7e, Brooks/Cole
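As a quick numeric illustration that the two algebraically equivalent forms agree, here is a short Python sketch; the sample variance, sample size, and population size below are hypothetical numbers chosen for the example.

```python
def est_var_ybar(s2, n, N):
    """Estimated variance of the sample mean under simple random sampling
    without replacement: (1 - n/N) * s2 / n, where (1 - n/N) is the fpc."""
    return (1 - n / N) * s2 / n

# hypothetical numbers: sample variance 4.0, sample size 25, population 1000
v = est_var_ybar(4.0, 25, 1000)          # fpc = 0.975, so v = 0.975 * 4 / 25
same = ((1000 - 25) / 1000) * 4.0 / 25   # the ((N - n)/N) * s^2/n form
```

The two expressions differ only in how the finite population correction is written, so `v` and `same` coincide.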
45,082
Sum of binomial coefficients with increasing $n$
Let's add in an initial value of $1 = \binom{n}{0}$. The fundamental relationship $$\binom{n}{k-1} + \binom{n}{k} = \binom{n+1}{k}\tag{1}$$ makes the sum telescope: $$\eqalign{ &\color{Blue}{\binom{n}{0} + \binom{n}{1}} &+\binom{n+1}{2} &+ \binom{n+2}{3} + \cdots &+ \binom{n+m}{m+1} \\ & =\color{Blue}{\binom{n+1}{1}} & +\color{Blue}{\binom{n+1}{2}} &+ \binom{n+2}{3} + \cdots &+ \binom{n+m}{m+1} \\ & &=\color{Blue}{\binom{n+2}{2}} & +\color{Blue}{\binom{n+2}{3}}+ \cdots &+ \binom{n+m}{m+1} \\ & & & = \cdots \\ & & & = \color{Blue}{\binom{n+m}{m}} &+\color{Blue}{ \binom{n+m}{m+1}} \\ & & & &= \color{Blue}{\binom{n+m+1}{m+1}}. }$$ Subtracting the $1$ originally added in gives the answer, $\binom{n+m+1}{m+1}-1$.

Antoni Parellada kindly points out (in a comment below) that this fundamental relationship $(1)$ is now often called "Pascal's Rule." This appendix shows how intimately connected that eponym is with the present question.

Here, from Pascal's Complete Works (republished by Hachette, Paris, in 1858 as Oeuvres Completes), is his original diagram of the "Arithmetical Triangle" as it appeared in the Traité du Triangle Arithmetique (1654). Pascal has labeled many of the cells with Greek and Latin letters for reference. After giving its rule of construction (the fundamental relationship above) he draws 18 "consequences." The second consequence is the present result. Like many proofs of his time, Pascal's proceeds by demonstrating the relationship for a specific case in a way that obviously generalizes:

(From a translation at http://cerebro.xu.edu/math/Sources/Pascal/Sources/arith_triangle.pdf. I discovered the original some years ago on the Cambridge University Library web site, but apparently it has been put behind a private wall since then.) It's a nice graphical way to display the telescoping.
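The telescoped identity is easy to verify numerically; here is a small stdlib-only Python check (not part of the original answer):

```python
from math import comb

def direct_sum(n, m):
    """The sum binom(n,1) + binom(n+1,2) + ... + binom(n+m,m+1), term by term."""
    return sum(comb(n + j, j + 1) for j in range(m + 1))

# Pascal's "second consequence": the sum telescopes to binom(n+m+1, m+1) - 1
for n in range(1, 8):
    for m in range(0, 8):
        assert direct_sum(n, m) == comb(n + m + 1, m + 1) - 1
```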
45,083
Sum of binomial coefficients with increasing $n$
$\binom{n+1}{2} = \binom{n}{1} + \binom{n}{2} = \binom{n}{1} + \frac{n-1}{2} \binom{n}{1}$

$\binom{n+2}{3} = \binom{n+1}{2} + \binom{n+1}{3} = \binom{n+1}{2} + \frac{n-1}{3} \binom{n+1}{2}$

$\binom{n+3}{4} = \binom{n+2}{3} + \binom{n+2}{4} = \binom{n+2}{3} + \frac{n-1}{4} \binom{n+2}{3}$

...

$\binom{n+m}{m+1} = \binom{n+m-1}{m} + \binom{n+m-1}{m+1} = \binom{n+m-1}{m} + \frac{n-1}{m+1}\binom{n+m-1}{m}$

Start with $\binom{n}{1} = n$ and store each term: use it to calculate the next term, $\binom{n+1}{2}$, then $\binom{n+2}{3}$, and so on (an O(1) operation per term), accumulating the sum as you go.
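The term-to-term update $\binom{n+j}{j+1} = \binom{n+j-1}{j}\,\frac{n+j}{j+1}$ (which equals the factor $1 + \frac{n-1}{j+1}$) lets you accumulate the whole sum with O(1) arithmetic per term; a Python sketch:

```python
from math import comb

def binom_running_sum(n, m):
    """Sum binom(n,1) + binom(n+1,2) + ... + binom(n+m,m+1), updating each
    term from the previous one via binom(n+j, j+1) = binom(n+j-1, j)*(n+j)/(j+1)."""
    term = n            # binom(n, 1)
    total = term
    for j in range(1, m + 1):
        term = term * (n + j) // (j + 1)   # exact: (j+1) always divides term*(n+j)
        total += term
    return total

# agrees with the direct term-by-term evaluation
assert binom_running_sum(5, 3) == sum(comb(5 + j, j + 1) for j in range(4))
```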
45,084
Which stats tests to use? Bee flight activity quantified during an eclipse
Here is a graph of the two days' data (Day 1, gold line with black markers; Day 2, black line with gold markers; I coded time in terms of minutes elapsed since the first measurement of the day):

The trouble with asking "whether there was a difference between Day 1 and 2" is that the answer is Yes. There are many differences:

- At every time of measurement, the bees' area of flight activity is different (even during the period when there was no eclipse).
- Within each day, there are differences in bees' area of flight activity: both Day 1 and Day 2 experience a dip and then something like an increase peaking, as it so happens, between 2:45 and 3:15 (90 minutes in), with activity dropping thereafter.
- Bees' area of flight activity at any point in time seems to be slightly different than its activity 15 minutes earlier, except from about 3:30 to 4:00 (105 to 120 minutes).
- The peak measurement of bees' area of flight activity is at 3:15 on Day 1, and 30 minutes earlier on Day 2.
- The mean bees' area of flight activity is 4.94 on Day 1, but 4.77 on Day 2; similarly, the variance is a good bit greater on Day 1 (0.0131) than on Day 2 (0.00874).

There are probably lots more ways to conceive of "differences" between the two days. So: statistics will be more useful to you when you can more precisely articulate which differences are important to you.

Here's another graphical take, but this time it's Day 1 activity minus Day 2 activity:

If both days were very similar, we might expect the curve to center vertically on 0. If we had the data for eclipse activity (inactivity?) at the same times as the bee measurements, we could ask whether the curve of the eclipse over this time looks like the bee difference curve. Your question about the Pearson correlation coefficient would be one way to do this. Regression would be another, and would give a bit more information. You could perform a test in such a circumstance (e.g., a test that Pearson's $\rho = 0$, or a test that the slope of the line, $\beta = 0$). But again, you might want to think carefully about which difference is meaningful to you.
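If you do go the correlation route, here is a stdlib-only Python sketch of the computation. The two short series below are hypothetical stand-ins for an eclipse-coverage curve and a Day 1 minus Day 2 activity curve; they are not the real data.

```python
from math import sqrt

def pearson_r(x, y):
    """Sample Pearson correlation coefficient."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / sqrt(sxx * syy)

# hypothetical values only, to show the call
eclipse_coverage = [0.0, 0.2, 0.5, 0.9, 1.0, 0.6, 0.1]
activity_diff    = [0.1, 0.2, 0.3, 0.5, 0.4, 0.3, 0.1]
r = pearson_r(eclipse_coverage, activity_diff)
```

A large $|r|$ would then motivate a formal test of $\rho = 0$ (or a regression of the difference curve on eclipse coverage).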
45,085
Clustering high dimensional data (p > n) in R
First some background: R is a good choice, with many clustering methods in different packages. The functions cover hierarchical clustering, partitioning clustering, model-based clustering, and cluster-wise regression.

Connectivity-based clustering, or hierarchical clustering (also called hierarchical cluster analysis or HCA), is a method of cluster analysis which seeks to build a hierarchy of clusters. Strategies for hierarchical clustering generally fall into two types:

- Agglomerative: a "bottom up" approach: each observation starts in its own cluster, and pairs of clusters are merged as one moves up the hierarchy.
- Divisive: a "top down" approach: all observations start in one cluster, and splits are performed recursively as one moves down the hierarchy.

In this method, different clusters form at different distances, which can be represented using a dendrogram.

Centroid-based clustering: clusters are represented by a central vector, which may not necessarily be a member of the data set. When the number of clusters is fixed to k, k-means clustering gives a formal definition as an optimization problem: find the k cluster centers and assign the objects to the nearest cluster center, such that the squared distances from the cluster centers are minimized.

km <- kmeans(iris[,1:4], 3)
plot(iris[,1], iris[,2], col=km$cluster)
points(km$centers[,c(1,2)], col=1:3, pch=8, cex=2)
table(km$cluster, iris$Species)

Distribution-based clustering: the clustering model most closely related to statistics is based on distribution models. Clusters can then easily be defined as objects most likely belonging to the same distribution. The main problem is overfitting.

Density-based clustering: clusters are defined as areas of higher density than the remainder of the data set. Objects in the sparse areas that are required to separate clusters are usually considered to be noise and border points.
In density-based clustering, a cluster extends along the density distribution. Two parameters are important: "eps" defines the radius of the neighborhood of each point, and "minpts" is the minimum number of neighbors within the "eps" radius. The basic algorithm, called DBSCAN, proceeds as follows:

First scan: for each point, compute the distance to all other points. Increment a neighbor count if the distance is smaller than "eps".
Second scan: for each point, mark it as a core point if its neighbor count is greater than "minpts".
Third scan: for each core point, if it is not already assigned a cluster, create a new cluster and assign it to this core point as well as all of its neighbors within the "eps" radius.

Unlike other clustering methods, density-based clustering can leave some points as outliers (data points that don't belong to any cluster). On the other hand, it can detect clusters of arbitrary shape (they don't have to be circular at all).

library(fpc)
# eps is radius of neighborhood, MinPts is no of neighbors within eps
sampleiris <- iris[sample(1:150, 40), ]  # (assumed) random subsample of iris; not defined in the original
cluster <- dbscan(sampleiris[,-5], eps=0.6, MinPts=4)
plot(cluster, sampleiris)
plot(cluster, sampleiris[,c(1,4)])
# Notice points in cluster 0 are unassigned outliers
table(cluster$cluster, sampleiris$Species)

With the recent need to process larger and larger data sets (also known as big data), the willingness to trade semantic meaning of the generated clusters for performance has been increasing. This led to the development of pre-clustering methods such as canopy clustering, which can process huge data sets efficiently, but the resulting "clusters" are merely a rough pre-partitioning of the data set, to be analyzed further with existing slower methods such as k-means clustering. For high-dimensional data, many of the existing methods fail due to the curse of dimensionality, which renders particular distance functions problematic in high-dimensional spaces.
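The three-scan DBSCAN procedure described above can be turned into a naive reference implementation directly. This Python sketch is purely illustrative (it is not a replacement for fpc::dbscan); points labeled 0 are noise/unassigned.

```python
from math import dist

def naive_dbscan(points, eps, min_pts):
    """Naive DBSCAN following the three scans described in the text.
    Returns one label per point: 0 = noise/unassigned, 1..k = cluster id."""
    n = len(points)
    # first scan: neighbours within eps (each point counts itself here)
    neighbours = [[j for j in range(n) if dist(points[i], points[j]) <= eps]
                  for i in range(n)]
    # second scan: core points have more than min_pts neighbours
    core = [len(nb) > min_pts for nb in neighbours]
    # third scan: grow a new cluster from each still-unassigned core point
    labels = [0] * n
    cluster = 0
    for i in range(n):
        if not core[i] or labels[i] != 0:
            continue
        cluster += 1
        frontier = [i]
        while frontier:
            p = frontier.pop()
            if labels[p] != 0:
                continue
            labels[p] = cluster
            if core[p]:  # only core points expand the cluster further
                frontier.extend(q for q in neighbours[p] if labels[q] == 0)
    return labels

# tiny hypothetical example: two tight blobs plus one isolated outlier
pts = [(0, 0), (0.1, 0), (0, 0.1), (0.1, 0.1),
       (5, 5), (5.1, 5), (5, 5.1), (5.1, 5.1),
       (10, 0)]
labels = naive_dbscan(pts, eps=0.5, min_pts=2)
```

Conventions for counting the point itself and for ">" versus ">=" on min_pts differ between implementations, so results can differ slightly from fpc::dbscan at the margins.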
This led to new clustering algorithms for high-dimensional data that focus on subspace clustering (where only some attributes are used, and cluster models include the relevant attributes for the cluster) and correlation clustering, which also looks for arbitrarily rotated ("correlated") subspace clusters that can be modeled by giving a correlation of their attributes. Example algorithms include PROCLUS, P3C and STATPC.

To your question: the package 'orclus' is available to perform subspace clustering and classification. The following is an example from the manual:

# definition of a function for parameterized data simulation
sim.orclus <- function(k = 3, nk = 100, d = 10, l = 4,
                       sd.cl = 0.05, sd.rest = 1, locshift = 1){
  ### input parameters for data generation
  # k        number of clusters
  # nk       observations per cluster
  # d        original dimension of the data
  # l        subspace dimension where the clusters are concentrated
  # sd.cl    (univariate) standard deviations for data generation
  #          (within cluster subspace concentration)
  # sd.rest  standard deviations in the remaining space
  # locshift parameter of a uniform distribution to sample different cluster means

  x <- NULL
  for(i in 1:k){
    # cluster centers
    apts <- locshift*matrix(runif(l*k), ncol = l)
    # sample points in original space
    xi.original <- cbind(matrix(rnorm(nk * l, sd = sd.cl), ncol = l)
                         + matrix(rep(apts[i,], nk), ncol = l, byrow = TRUE),
                         matrix(rnorm(nk * (d-l), sd = sd.rest), ncol = (d-l)))
    # subspace generation
    sym.mat <- matrix(nrow = d, ncol = d)
    for(m in 1:d){
      for(n in 1:m){
        sym.mat[m,n] <- sym.mat[n,m] <- runif(1)
      }
    }
    subspace <- eigen(sym.mat)$vectors
    # transformation
    xi.transformed <- xi.original %*% subspace
    x <- rbind(x, xi.transformed)
  }
  clids <- rep(1:k, each = nk)
  result <- list(x = x, cluster = clids)
  return(result)
}

# simulate data of 2 classes where class 1 consists of 2 subclasses
simdata <- sim.orclus(k = 3, nk = 200, d = 15, l = 4,
                      sd.cl = 0.05, sd.rest = 1, locshift = 1)
x <- simdata$x
y <- c(rep(1, 400), rep(2, 200))

res <- orclass(x, y, k = 3, l = 4, k0 = 15, a = 0.75)
res

# compare results
table(res$predict.train$class, y)

You may also be interested in HDclassif (an R package for model-based clustering and discriminant analysis of high-dimensional data).
45,086
Linear mixed effect model vs. Ordered Probit vs. Ordered Logit with ordinal response
This is going to be at best a partial answer, but I hope it helps a little.

Given that your response is ordinal, you have to ask yourself whether the distance between different categories depends on the starting position. In other words, if you think the gap between 1 and 3 is not necessarily the same as the gap between 2 and 4, then using a cumulative link model (e.g. logit or probit) is the best option. I'd recommend reading the tutorial and extra information from Christensen (2013) on the ordinal package to help you along the way.

Why people prefer lmer() probably has less to do with good statistics or econometrics and more with habit and method institutionalisation. I know from experience that coming up with a CLM model while most people use GLS or OLS can be ill-advised, not because the CLM is not a better model, but because you are basically telling your community of readers "so far you guys were wrong", which is not that easy to swallow.

Centering and standardizing are often done because people think they will alleviate specific concerns about collinearity and so on. There is much debate all over the place about whether this is true, but in my opinion (and the same goes for log-transformations) you are reducing variance and changing the data, which is only a good idea if you have a theoretical motivation for it; if not, work with the actual data and change your model.

As to the choice of logit versus probit: the difference is not that big in general. Once again I would be guided by the data.
You can start as follows:

# Finding the best matching link function
links <- c("logit", "probit", "cloglog", "loglog", "cauchit")
sapply(links, function(link){
  clm(formula, data=df, link=link)$logLik})  # see which one fits best

# Finding the best threshold function
thresholds <- c("symmetric", "flexible", "equidistant")
sapply(thresholds, function(threshold){
  clm(formula, data=df, link="select best fitting link function",
      threshold=threshold)$logLik })

This will show you which model fits your data best (the highest log-likelihood). On logit versus probit: the probit is tied to the normal distribution, while the logistic distribution (logit) has slightly heavier tails than the normal, which matters when you have a lot of "extreme" values (i.e. worst and best in your case).

Finally, here is some simple code that will give you a very good idea of how well your model (and X variables) allow you to predict the response variable. This code will plot the incidence of right responses and the number of wrong predictions (and how wrong they are) in your model.

pred. <- predict(FORMULA, type = "class")$fit  # FORMULA stands for your fitted clm object
plot(df$RESPONSE, pred., type="p", pch=15,
     cex = sqrt(table(df$RESPONSE, pred.))/5)
# YOU WILL SEE THIS PLOT IS NOT THAT USEFUL

results <- data.frame(cbind(
  as.numeric(as.character(df$RESPONSE)),
  as.numeric(as.character(pred.)),
  as.numeric(as.character(df$RESPONSE)) - as.numeric(as.character(pred.))))
sum(results[,3])
# THIS GIVES A FAST IDEA OF THE DIRECTION OF YOUR ERRORS: A LARGE POSITIVE
# VALUE MEANS YOU TEND TO UNDERESTIMATE THE ACTUAL RESPONSE, A LARGE
# NEGATIVE VALUE MEANS YOU TEND TO OVERESTIMATE IT

results$dum <- 1
tmp <- data.frame(with(results, tapply(dum, results[,3], sum)))
tmp$z <- seq(min(results[,3]), max(results[,3]), by=1)
plot(tmp$z, tmp[,1], type="h",
     xlab="Deviation from correct prediction", ylab="Number of Predictions")

Let me know what you think!
Simon
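For intuition about what a cumulative link model actually computes, here is a minimal Python sketch (the thresholds and linear predictor value are hypothetical) of how an ordered logit turns a linear predictor $\eta$ into category probabilities via $P(Y \le j) = \mathrm{logistic}(\theta_j - \eta)$:

```python
from math import exp

def ordered_logit_probs(eta, thresholds):
    """Category probabilities of an ordered (cumulative) logit model with
    strictly increasing thresholds theta_1 < ... < theta_{J-1}."""
    logistic = lambda z: 1.0 / (1.0 + exp(-z))
    cum = [logistic(t - eta) for t in thresholds] + [1.0]  # P(Y <= j), j = 1..J
    # successive differences of the cumulative probabilities give P(Y = j)
    return [cum[0]] + [cum[j] - cum[j - 1] for j in range(1, len(cum))]

# hypothetical 4-category example: thresholds -1, 0, 1, linear predictor 0.3
p = ordered_logit_probs(0.3, [-1.0, 0.0, 1.0])  # probabilities sum to 1
```

Because only the cumulative probabilities are modeled, the category spacing is free to be unequal, which is exactly the flexibility an ordinal response needs.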
45,087
Monte Carlo integration aim for maximum variance
50% is wrong: the closer you can get to 100% the better off you are.

Let the measure of the target region $T$ be $t$ and the measure of the enclosing (or "probe") region $V$ be $v$. The chance of a uniformly random point in $V$ to lie in $T$ therefore is $t/v$. This Bernoulli distribution has variance $$\frac{t}{v}\left(1-\frac{t}{v}\right).$$

The estimate of $t$ is based on $n$ independent uniform samples of $V$; the count of hits is therefore a Binomial$(n, t/v)$ variate with variance $n$ times greater than that of a single sample. When its outcome is $X$ the estimate will be the proportion $\hat{t}_n = v\left(\frac{X}{n}\right) = \left(\frac{v}{n}\right)X$. Its variance is $$\text{Var}(\hat{t}_n) = \left(\frac{v}{n}\right)^2\text{Var}(X) = \left(\frac{v}{n}\right)^2 n\left(\frac{t}{v}\left(1-\frac{t}{v}\right)\right) = \frac{1}{n}t(v-t).$$

Because $V$ encloses $T$, $v\ge t$. The variance, being a linear function of $v$ in this interval, obviously is minimized at $v=t$. Ergo, the correct rule is to find an enclosing volume that is as close to $T$ as possible. Ideally, $V$ will be $T$ itself and the correct answer will be available upon taking $n=0$ samples!

As a check, I performed $500$ Monte-Carlo integrations of $\int_2^3\int_6^7 x y\ dy dx$ using enclosing volumes of constant heights $21$, $32.5$, and $84$ and $n=1000$ iterations per integration. For instance, here are the results of one of the integrations for $21$ (showing the target region $T$ beneath the blue surface graphing $xy$ over the square $[2,3]\times[6,7]$). The black dots fall within $T$ while the red dots, although still within $V$, fall outside $T$.

The results of these $500$ trials are

    Height:      21     32.5      84
    --------------------------------
    t/v:        77%      50%     19%
    Mean:    16.258   16.235  16.280
    Variance: 0.077    0.271   1.061
    t(v-t)/n: 0.077    0.264   1.100

The true mean of $65/4$ was adequately estimated on average in all three situations. Their variances are comfortably close to the value of $t(v-t)/n$ previously derived.
Clearly 50% (the middle column) is not more accurate: its estimated variance of $0.271$ is almost four times worse than the estimated variance of $0.077$ achieved when the target region is $77\%$ of the probe region (left column). It would therefore take approximately $0.271/0.077 = 3.5$ times as many iterations in the $50\%$ configuration to achieve the same level of accuracy as the $77\%$ configuration. In fact, when I redid the $50\%$ calculation using $n=3500$ iterations per integral, the variance of $500$ trials was $0.065$. This does not differ significantly from $0.077$. The import of this variance calculation is plain: lower variances mean either less computation or better accuracy (or both).
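The experiment is easy to redo. Here is a hedged Python re-implementation of the 500-trial check for the three heights (my own sketch, not the tooling used above); note the base of the box is a unit square, so the probe measure $v$ equals the height:

```python
import numpy as np

rng = np.random.default_rng(0)

def mc_volume(height, n=1000, trials=500):
    """Estimate t = integral of x*y over [2,3]x[6,7] (true value 65/4)
    by uniform sampling in the box [2,3]x[6,7]x[0,height]."""
    v = height * 1.0                      # measure of the probe region V
    ests = np.empty(trials)
    for i in range(trials):
        x = rng.uniform(2, 3, n)
        y = rng.uniform(6, 7, n)
        z = rng.uniform(0, height, n)
        ests[i] = v * np.mean(z < x * y)  # v * (fraction of hits)
    return ests

t = 65 / 4
est = {h: mc_volume(h) for h in (21, 32.5, 84)}
for h, e in est.items():
    print(f"height {h:>5}: mean {e.mean():6.3f}  var {e.var():.3f}  "
          f"t(v-t)/n {t * (h - t) / 1000:.3f}")
```

With a different seed the numbers wobble, but the ordering of the variances (and their agreement with $t(v-t)/n$) is stable.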
45,088
Monte Carlo integration aim for maximum variance
Partial solution that explains why 2 % is worse than 50 %, but does not arrive at the 50 % guideline.

The variance of the estimate $\hat{p}$ of the proportion $|T|/V$,
\begin{equation}
\textrm{Var}\left(\hat{p}\right) = \frac{p(1-p)}{N},
\end{equation}
is indeed maximized when $V=2|T|$. However, we are actually interested in the estimation error in the estimate of the volume $|T|$, not the proportion. Let us compute the variance of the estimate of $|T|$:
\begin{equation}
\textrm{Var}\left(\hat{|T|}\right) = \textrm{Var}\left(V \hat{p}\right) = V^2 \textrm{Var}\left(\hat{p}\right) = \frac{V^2 p(1-p)}{N}.
\end{equation}
Now, apply the fact that $p$ depends on $|T|$ and $V$, namely $p(1-p) = \frac{|T|}{V}\left(1-\frac{|T|}{V}\right) = \frac{|T|(V-|T|)}{V^2}$:
\begin{equation}
= \frac{V^2 \, |T|(V-|T|)/V^2}{N} = \frac{|T|V-|T|^2}{N}.
\end{equation}
Now, for constant sample size $N$ and true volume $|T|$, the variance of the estimator is a linear function of $V$ which is increasing for $V\geq |T|$. However, the optimal solution based on this line of thought would be $V = |T|$!

Intuitively, if $S$ is huge, the proportion of samples in $T$ can be (in absolute terms) estimated pretty accurately to be about 0, but that does not tell us much about the volume of $T$. On the other hand, if we are able to set $S=T$ and know $V$, then we also know $|T|$, and thus this would be optimal.

I don't know where the 50 % guideline comes from, as the derivation in my answer would suggest to use the smallest possible $S$ that satisfies the conditions: i) $S$ covers $T$, ii) we know the volume $|S|=V$, iii) we can sample points in $S$.
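Whichever way the algebra is arranged, $V^2\,p(1-p)$ with $p = |T|/V$ reduces to $|T|(V-|T|)$, so the variance is $|T|(V-|T|)/N$. A quick numeric check in Python (mine, not part of the original answer) confirms it is zero at $V=|T|$, increases with $V$, and matches simulation:

```python
import numpy as np

rng = np.random.default_rng(1)

t, N = 1.0, 1000

def var_formula(t, V, N):
    # Var(|T|-hat) = |T| * (V - |T|) / N  (the V^2 factors cancel)
    return t * (V - t) / N

Vs = np.linspace(t, 5 * t, 9)
vals = var_formula(t, Vs, N)
print(vals)                      # zero at V = t, then strictly increasing

def simulate_var(t, V, N, trials=4000):
    # draw p-hat = hits/N, scale by V, and look at the empirical variance
    hits = rng.binomial(N, t / V, size=trials)
    return np.var(V * hits / N)

sim = simulate_var(t, 2.0, N)
pred = var_formula(t, 2.0, N)
print(sim, pred)                 # both close to 0.001
```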
45,089
Monte Carlo integration aim for maximum variance
It's easy. Let's say T is a unit circle, and S is a square that contains it. When you sample from S, you want to pick points which are close to where the circle's boundary is, right? If you sample points near the center of the square, you know that they're going to be inside the circle too. There's very little information gained from checking whether (0,0) is inside the circle; in fact there's no information at all: we know it's inside the circle. And the same for (1,1): we know that it's definitely outside.

So, it makes sense to go for points farther from the center, e.g. (0.7, 0.6): is this inside or outside the circle? Picking this point and checking it will bring useful results.

That was the intuition: you want to sample from the regions where the boundary of T goes, and in these regions the hit probabilities will tend to be far from both 0 and 1, and the farthest you can get from both is 0.5.

UPDATE: A less intuitive but more precise answer is that your sampling distribution should be proportional to the integrand to minimize the variance. That is the same as saying that you want to sample more often from the regions where your volume's boundary goes through.
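The closing point (sample in proportion to the integrand) can be demonstrated directly. In the Python sketch below (my own illustration, with invented numbers), the integrand is f(x) = 5x^4 on [0, 1], whose integral is 1, and the proposal Beta(5, 1) has density exactly proportional to f, so every importance weight is constant and the estimator's variance collapses to zero:

```python
import numpy as np

rng = np.random.default_rng(2)

def f(x):
    return 5 * x**4               # integrand on [0, 1]; true integral is 1

n = 10_000

# naive Monte Carlo with uniform samples
u = rng.uniform(0, 1, n)
naive = f(u)

# importance sampling: Beta(5, 1) has density q(x) = 5 x^4, exactly
# proportional to f, so every weight f(x)/q(x) equals the integral itself
s = rng.beta(5, 1, n)
weighted = f(s) / (5 * s**4)

print(naive.mean(), naive.var())        # mean ~1, sizeable variance
print(weighted.mean(), weighted.var())  # exactly 1, zero variance
```

In realistic problems you cannot sample exactly proportionally to the integrand (that would require knowing the answer), but the closer the proposal gets, the smaller the variance.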
45,090
If any two variables follow a normal bivariate distribution does it also have a multivariate normal distribution?
Here's a counter-example: Let $X$, $Y$, $Z$ be independent standard normal, and let $W = |Z|\cdot \text{sign}(XY)$. Then $(W,X)$, $(W,Y)$ and $(X,Y)$ are bivariate normal, but $(W,X,Y)$ is not trivariate normal, since $WXY$ is never negative.

What's happening is that the trivariate distribution has been constructed so that probability is only in four of the eight octants, in such a way that each quadrant of the pairwise margins gets an octant with probability and an octant without.

To help visualize what's going on, see the following simulation:

    x=rnorm(1000)
    y=rnorm(1000)
    z=rnorm(1000)
    w=abs(z)*sign(x*y)

Here are the pairwise samples:

Here's the sample bivariate distribution of $X$ and $Y$ when $W$ is restricted to be positive (when $W$ is restricted to be negative, the $(X,Y)$ values are in the other two quadrants):

And here's a particular projection of the trivariate distribution; you should, for example, be able to make out that there's a low-density "gap" at the bottom.

That might seem like a somewhat artificial counterexample, but it's not an issue just with some odd edge-cases. More generally, the trivariate distribution may be quite different from trivariate normal, in any number of smooth or not-smooth ways. The same goes for more than three variables. Copulas give us a way of constructing infinities of such counterexamples with various characteristics.
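The construction is cheap to verify numerically. Here is a Python transcription of the R snippet above (my checks, not part of the original answer): $WXY$ is never negative, yet $W$ looks exactly like a standard normal marginally and is uncorrelated with $X$:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 100_000
x = rng.standard_normal(n)
y = rng.standard_normal(n)
z = rng.standard_normal(n)
w = np.abs(z) * np.sign(x * y)   # the counterexample's construction

wxy_min = np.min(w * x * y)
print(wxy_min)                   # >= 0: all mass sits in four octants
print(w.mean(), w.var())         # ~0 and ~1: w is marginally standard normal
print(np.corrcoef(w, x)[0, 1])   # ~0: (w, x) is an uncorrelated normal pair
```

Since sign(XY) is a fair coin flip independent of $|Z|$, the marginal of $W$ is exactly standard normal even though the joint distribution is far from trivariate normal.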
45,091
If any two variables follow a normal bivariate distribution does it also have a multivariate normal distribution?
No. In theory, exceptions like @Glen_b's answer may apply. In practice, multivariate normality partly depends on how precisely your variables follow their normal uni/bivariate distributions. It is rare that any real distribution has exactly zero skewness and zero excess kurtosis, after all. Therefore, if you're inferring that all the bivariate distributions are normal because you fail to reject the null hypothesis of a multivariate normality test on each, the result of the same test for all three variables might just cross your rejection threshold if the bivariate results are all $p = .06$ and your $\alpha = .05$. This answer may not matter to you if you've already avoided such problems with significance tests of normality though, and have determined through better means that your variables are normally distributed (whether exactly, or close enough). Even so, in practice, one may find that subsets of data exhibit problematic relationships like that in Glen_b's example. Acceptable deviation from bivariate normality, once mixed with even limited instances of such multivariate problems, may exceed your purpose's tolerance for non-normality, which is bound to appear to some extent in most real sample data.
45,092
How to specify/restrict the sign of coefficients in a GLM or similar model in R
The negative estimated coefficient on something that you KNOW is positive comes from omitted variable bias and/or collinearity between your regressors.

For prediction, this isn't so problematic, so long as the new data whose outcome (price?) you are predicting come from the same population as your sample. The negative coefficient comes either because the variable is highly correlated with something else, making the coefficient estimate highly variable, OR because it is correlated with something important that is omitted from your model, and the negative sign is picking up the effect of that omitted factor.

But it sounds like you are also trying to do inference: how much does an exogenous change in $X$ change $Y$? Causal inferential statistics uses different methods and has different priorities than predictive statistics. It is particularly well developed in econometrics. Basically you need to find strategies such that you can convince yourself that $E(\hat\beta|X,whatever)=\beta$, which generally involves making sure that the regressor of interest is not correlated with the error term; this is generally accomplished by controlling for observables (or unobservables in certain cases). Even if you get to that point, however, collinearity will still give you highly variable coefficients, but negative signs on something that you KNOW is positive will generally come with huge standard errors (assuming no omitted variable bias).

Edit: if your model is
$$ price = g^{-1}\left(\alpha + country'\beta + \gamma\, distance + whatever + \epsilon\right) $$
then country will be correlated with distance. Hence, if you are in Tajikistan and you are getting a spice from Vanuatu, then the coefficient on Vanuatu will be really high. After controlling for all of these country effects, the additional effect of distance may well not be positive. In this case, if you want to do inference and not prediction (and think that you can specify and estimate a model that gives a causal interpretation), then you may wish to take out the country variables.
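A hypothetical toy simulation (Python, with all numbers invented for illustration) makes the sign flip concrete: the true distance effect on price is +0.5, but distance is correlated with a country dummy carrying a large negative effect, so the short regression that omits the dummy estimates a negative distance coefficient:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 5_000

country = rng.integers(0, 2, n).astype(float)  # 1 = some cheap far-away origin
distance = 5 * country + rng.normal(0, 1, n)   # distance correlated with country
price = 0.5 * distance - 8 * country + rng.normal(0, 1, n)  # true effect: +0.5

def ols(X, y):
    """Least-squares fit with an intercept; returns [intercept, slopes...]."""
    Z = np.column_stack([np.ones(len(y)), X])
    return np.linalg.lstsq(Z, y, rcond=None)[0]

short = ols(distance, price)                          # country omitted
long = ols(np.column_stack([distance, country]), price)
print("distance coef, country omitted :", short[1])   # negative!
print("distance coef, country included:", long[1])    # close to +0.5
```

The omitted-variable bias formula predicts the short-regression slope to be roughly 0.5 - 8 * Cov(distance, country) / Var(distance), which is about -0.88 with these invented numbers.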
45,093
How to specify/restrict the sign of coefficients in a GLM or similar model in R
You can do this in R with lavaan by specifying the model as a structural equation model and adding constraints. I'm not sure if it's a good idea, but it can be done.

    # load library and generate some data
    library(lavaan)
    d <- as.data.frame(matrix(rnorm(1:3000), ncol=3,
                              dimnames=list(NULL, c("y", "x1", "x2"))))

Run it with GLM:

    > summary(glm(y ~ x1 + x2, data=d))

    Call:
    glm(formula = y ~ x1 + x2, data = d)

    Deviance Residuals: 
        Min       1Q   Median       3Q      Max  
    -3.6385  -0.5899  -0.0224   0.6024   3.0131  

    Coefficients:
                Estimate Std. Error t value Pr(>|t|)
    (Intercept) -0.01855    0.03021  -0.614    0.539
    x1           0.01208    0.03049   0.396    0.692
    x2          -0.03676    0.03021  -1.217    0.224

    (Dispersion parameter for gaussian family taken to be 0.912437)

        Null deviance: 911.2  on 999  degrees of freedom
    Residual deviance: 909.7  on 997  degrees of freedom
    AIC: 2751.2

Then run the same model with lavaan, to check equivalence:

    > model1.syntax <- '
    + y ~ x1 + x2
    + '
    > summary(sem(model1.syntax, data=d))

    lavaan (0.5-14) converged normally after 1 iterations

      Number of observations                          1000
      Estimator                                         ML
      Minimum Function Test Statistic                0.000
      Degrees of freedom                                 0
      P-value (Chi-square)                           1.000

    Parameter estimates:

      Information                                 Expected
      Standard Errors                             Standard

                       Estimate  Std.err  Z-value  P(>|z|)
    Regressions:
      y ~
        x1                0.012    0.030    0.397    0.691
        x2               -0.037    0.030   -1.219    0.223

    Variances:
        y                 0.910    0.041

In lavaan, you then add constraints, by naming the parameters and adding a constraint section:

    > model2.syntax <- '
    + y ~ b1 * x1 + b2 * x2
    + '
    > model2.constraints <- '
    + b1 > 0
    + b2 > 0
    + '
    > summary(sem(model=model2.syntax, constraints=model2.constraints, data=d))

    lavaan (0.5-14) converged normally after 1 iterations

      Number of observations                          1000
      Estimator                                         ML
      Minimum Function Test Statistic                1.484
      Degrees of freedom                                 0
      P-value (Chi-square)                           0.000

    Parameter estimates:

      Information                                 Observed
      Standard Errors                             Standard

                       Estimate  Std.err  Z-value  P(>|z|)
    Regressions:
      y ~
        x1      (b1)      0.012       NA
        x2      (b2)      0.000       NA

    Variances:
        y                 0.911    0.041

    Constraints:                               Slack (>=0)
        b1 - 0                                       0.012
        b2 - 0                                       0.000

Instead of being negative, the b2 parameter is fixed to zero. Notice that you don't get any standard errors: if you want them, you have to bootstrap. (That's described in the lavaan manual.)
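If you only need point estimates, the same clamp-at-zero behaviour can be reproduced without SEM machinery. Below is a hedged numpy-only sketch of non-negative least squares via projected gradient descent (my own illustration with made-up data, not what lavaan does internally): the coefficient whose unconstrained estimate is negative gets pinned to zero.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 1_000
x1 = rng.standard_normal(n)
x2 = rng.standard_normal(n)
y = 0.8 * x1 - 0.5 * x2 + rng.standard_normal(n)  # true x2 effect is negative

X = np.column_stack([x1, x2])

# unconstrained least squares (no intercept; data are roughly centred)
b_ols = np.linalg.lstsq(X, y, rcond=None)[0]

# non-negative least squares via projected gradient descent
b = np.zeros(2)
step = 1.0 / np.linalg.norm(X.T @ X, 2)   # 1 / largest eigenvalue: safe step
for _ in range(5_000):
    b = b - step * (X.T @ (X @ b - y))    # gradient step on squared error
    b = np.maximum(b, 0.0)                # project back onto b >= 0

print("OLS :", b_ols)   # second coefficient is clearly negative
print("NNLS:", b)       # second coefficient is clamped to zero
```

As with the lavaan fit, standard errors for the constrained estimates are not free; bootstrapping is the usual way to get them.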
45,094
How to specify/restrict the sign of coefficients in a GLM or similar model in R
For future reference, if you don't mind switching to a lasso-type GLM, you can use cv.glmnet with the argument lower.limits to specify which parameters should not go below 0. It also has the nice property of removing a lot of spurious correlations from the fit: using as reference the model with lambda = "lambda.1se", all parameters that are not really relevant, based on cross-validation, will be set to 0. In my experience on datasets with similar issues, just switching to the lasso fixes most of the negative values, and setting lower.limits obviously fixes all of them.
Standard error from correlation coefficient
If you look at the Wikipedia page for the Pearson product-moment correlation, you will find sections that describe how confidence intervals can be calculated. Typically, people will use Fisher's $z$-transformation (the inverse hyperbolic tangent, artanh) to turn $r$ into a variable that is approximately normally distributed: $$ z_r = \frac 1 2 \ln \frac{1 + r}{1 - r} $$ Having applied this transformation, the standard error will be approximately $^1/_{\sqrt{(N-3)}}$. With this you can form a confidence interval of whatever level you like. Once you've found the confidence limits you want, you can back-transform them to the original $r$ scale (i.e., $[-1, 1]$) like so: $$ \text{CI limit}_r = \frac{\exp(2z) - 1}{\exp(2z) + 1} $$ (which is just $\tanh(z)$). In other words, you can form a confidence interval for $r$ without the original data, so long as you have the original $N$.

Notes:
- This approach is an approximation; there are exact formulae listed on the Wikipedia page, but they are harder to use.
- Although it doesn't say so on the Wikipedia page, there are several conditions you want to meet in order for this approximation to be reasonable. $N$ should be at least $30$ (IIRC), and the marginal distributions (i.e., the univariate distributions of the two variables being correlated) should be normal. For example, I'm not sure that this will be accurate if the correlation were computed from two vectors of $1$s and $0$s. However, higher $N$ should allow you to compensate for minor non-normality.
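The whole recipe fits in a few lines of code. A sketch in Python (the values r = 0.3, N = 200 are made up for illustration):

```python
import math

def fisher_ci(r, n, z_crit=1.96):
    """Approximate 95% CI for a Pearson correlation via Fisher's z-transformation."""
    z = 0.5 * math.log((1 + r) / (1 - r))     # z_r = artanh(r)
    se = 1.0 / math.sqrt(n - 3)               # approximate SE on the z scale
    lo_z, hi_z = z - z_crit * se, z + z_crit * se
    # back-transform each limit to the r scale (equivalent to tanh)
    back = lambda v: (math.exp(2 * v) - 1) / (math.exp(2 * v) + 1)
    return back(lo_z), back(hi_z)

lo, hi = fisher_ci(0.3, 200)
# lo ≈ 0.17, hi ≈ 0.42
```

Note that only r and N are needed, exactly as stated above: no raw data required.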
Standard error from correlation coefficient
To add to gung's answer, one can also use a lazy approach of directly calculating the standard error for the correlation. This will produce inaccurate results in some cases and may produce impossible, out-of-range confidence intervals. But for most cases it's fine. The equation is:

$$ SE_r = \sqrt{\frac{1 - r^2}{N - 2}} $$

Example calculation of confidence interval

Assume that n=200, r=.3. I use the CIr function from psychometric to get the CIs based on Fisher's z transformation. Then I calculate the standard error of the correlation based on the direct approach and find the same CI (95%):

> psychometric::CIr(.3, 200)
[1] 0.17 0.42
> sqrt((1-.3^2)/(200-2))
[1] 0.068
> .3 - 1.96 * 0.068
[1] 0.17
> .3 + 1.96 * 0.068
[1] 0.43

.17-.42 vs. .17-.43. Thus, we see that the approaches yield only a minor difference. The quick method gets more inaccurate the closer the |correlation| gets to 1 and with small n's. To illustrate, assume now that n=20, r=.9. Then:

> psychometric::CIr(.9, 20)
[1] 0.76 0.96
> sqrt((1-.9^2)/(20-2))
[1] 0.1
> .9 + 1.96 * 0.1
[1] 1.1
> .9 - 1.96 * 0.1
[1] 0.7

So, here the results are markedly different: .76-.96 vs. .7-1.1! The latter is impossible, so we could truncate it to .7-1.0. The two plots below show the difference in the lower and upper limits, respectively. Blue indicates that the quick method produced too low values, red that it produced too high values, and green that it gave correct answers. My takeaway is that when n is below 100, the quick method gives pretty imprecise results, but for larger n's it doesn't matter so much.

The equation is given in e.g.:

Cohen, J., & Cohen, J. (Eds.). (2003). Applied multiple regression/correlation analysis for the behavioral sciences (3rd ed.). Mahwah, NJ: L. Erlbaum Associates.

See also this question on SO.
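The comparison can be reproduced without the psychometric package. A rough sketch of both intervals in Python (not R), using the same illustrative values as above:

```python
import math

def fisher_ci(r, n):
    """CI for r via Fisher's z-transformation (as in gung's answer)."""
    z = 0.5 * math.log((1 + r) / (1 - r))
    se = 1.0 / math.sqrt(n - 3)
    return tuple(math.tanh(z + s * 1.96 * se) for s in (-1, 1))

def quick_ci(r, n):
    """The lazy direct approach: SE_r = sqrt((1 - r^2) / (n - 2))."""
    se = math.sqrt((1 - r ** 2) / (n - 2))
    return r - 1.96 * se, r + 1.96 * se

# large n, moderate r: the two intervals nearly coincide
f1, q1 = fisher_ci(0.3, 200), quick_ci(0.3, 200)

# small n, large r: the quick interval spills past 1
f2, q2 = fisher_ci(0.9, 20), quick_ci(0.9, 20)
# f2 ≈ (0.76, 0.96); q2 ≈ (0.70, 1.10)
```

This reproduces the pattern in the R session above: near-identical limits for n=200, r=.3, and an impossible upper limit above 1 for n=20, r=.9.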
Performance evaluation of auto.arima in R and UCM on one dataset
I employed AUTOBOX (a piece of software that I have helped develop). The automatic model identification scheme detected a first-difference model with an ar(3) component. The test for constancy of parameters revealed a possible breakpoint at or around period 53 (year 1953; note that the UCM model declared a new trend at 1954), which suggests a possible regime change between observations 1-52 and 53-83. This is easily confirmed visually by examining a plot of the data. The resulting model is an exponential smoothing model (a very particular case of an ARIMA model) with a constant and an adjustment for the 75th data point. The residual plot (based on the remaining 31 values) and its ACF are suggestive of sufficiency. The next plot is the actuals/fit and forecast. The ARIMA model identified has a significant negative constant and thus provides downward guidance.

The difference between AUTOBOX and other approaches is that AUTOBOX tested for transient parameters and concluded that the older data (obs 1-52) was inconsistent with observations 53-83. This is the "elephant in the room" that nobody dares to mention. The assumption that all of the data comes from the same model with constant parameters needs to be verified, NOT ignored. Just because we know that 83 values exist DOESN'T mean that we should use all of the data. Modelling the entire series does not necessarily model its individual subsets.

I must comment that evaluating a forecast from one origin is insufficient research. One must look at many origins and possibly different lead times.
Performance evaluation of auto.arima in R and UCM on one dataset
A recent change to the way regression coefficients are initialized in the estimation of ARIMA models means that a different model is now selected by auto.arima (using R 3.0.2):

> auto.arima(window(eggs, end=1983))
ARIMA(0,1,0) with drift

Coefficients:
        drift
      -2.2665
s.e.   3.1133

sigma^2 estimated as 804.5:  log likelihood=-390.15
AIC=784.31   AICc=784.46   BIC=789.14

See https://bugs.r-project.org/bugzilla3/show_bug.cgi?id=15396 for discussion of the change in the initialization. This is relevant here as the drift is estimated as a linear regression. The change is in stats::arima, not in auto.arima.

Note that the drift is not significant in any case, so you can hardly claim that the non-trended model is unreasonable. Of course, if you know something about the data, then you should use the knowledge you have. But if you want something completely automatic, you can't really complain if it gives you something that doesn't fit your preconceived ideas of what is reasonable.

A state space unobserved component model will be very similar to an ARIMA model for these data. In fact, there will be an equivalent ARIMA model for any sensible UCM fitted to the data. So it makes no sense to say one forecasts better than the other. One might be easier to use than the other, or more interpretable than the other.

Yes, there are benefits. ARIMA is much better at easily handling complicated short-term dynamics. UCM is much better at decomposing the series into interpretable components.
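To see what the selected ARIMA(0,1,0)-with-drift model actually does: the drift estimate is just the mean of the first differences, and the h-step forecast is the last observation plus h times the drift, i.e. a straight line. A quick sketch in Python (not R); the simulated series merely plays the role of the eggs data, and the numbers are invented:

```python
import numpy as np

rng = np.random.default_rng(0)
# invented random walk with drift standing in for the eggs series
y = 300.0 + np.cumsum(-2.3 + 25.0 * rng.standard_normal(84))

# ARIMA(0,1,0) with drift: the drift estimate is the mean first difference
drift = np.diff(y).mean()

# h-step-ahead forecasts lie on a straight line from the last observation
h = np.arange(1, 11)
fc = y[-1] + h * drift
```

Whether that line slopes downward then depends entirely on whether the estimated drift is negative, which is why the initialization change (and the insignificance of the drift) matters so much here.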
Performance evaluation of auto.arima in R and UCM on one dataset
I don't get the same result as the OP (version 5.0 of the forecast package). If you run the following, the result is indeed a linear downward trend.

install.packages("fma")
library(fma)
install.packages("forecast")
library(forecast)

# window as per the OP
eggs2 <- window(eggs, start=c(1900), end=c(1983))
plot(eggs2)

# does produce a linear trend downward!
model2 <- auto.arima(eggs2)
model2
plot(forecast(model2, 10))

Maybe there was a change where allowdrift was not TRUE by default in the package version used? Further, if you run auto.arima on the full data set (without withholding the last 10 years), which I believe is what Dr. Hyndman was doing, the downward trend is picked up.

model <- auto.arima(eggs)
model

Series: eggs
ARIMA(0,1,1) with drift

Coefficients:
          ma1    drift
      -0.1630  -2.3774
s.e.   0.1145   2.3229

sigma^2 estimated as 713:  log likelihood=-432.26
AIC=870.51   AICc=870.78   BIC=878.11
AR(1) coefficient is correlation?
For a second-order stationary series, it is the correlation coefficient between the dependent value and its lag. Specify

$$y_{t+1} = \beta y_t + u_{t+1}\qquad u_{t+1}= \text{white noise}$$

The correlation coefficient between $y_{t+1}$ and $y_{t}$ is defined as usual:

$$\rho_{(1)} = \frac{\text{Cov}(y_{t+1},y_{t})}{\sigma(y_{t+1})\sigma(y_t)}$$

Now

$$\text{Cov}(y_{t+1},y_{t}) = E(y_{t+1}y_{t}) - E(y_{t+1})E(y_{t})$$
$$ = E\Big((\beta y_t+u_{t+1})y_{t}\Big) - E(y_{t+1})E(y_{t}) = E\Big(\beta y_t^2+u_{t+1}y_{t}\Big) - E(y_{t+1})E(y_{t})$$

Now $u_{t+1}$ and $y_{t}$ are independent. Also, the expected value of the $y$-series is zero, given the specification. Using these facts we end up with

$$\text{Cov}(y_{t+1},y_{t}) =\beta E(y_t^2) = \beta\text{Var}(y_t) $$

Since we assume second-order stationarity, $\text{Var}(y_t) = \text{Var}(y_{t+1}) = \text{Var}(y)$. Inserting all this back into the correlation coefficient:

$$\rho_{(1)} = \frac{\beta\text{Var}(y)}{\sigma(y)\sigma(y)} = \frac{\beta\text{Var}(y)}{\text{Var}(y)} = \beta. $$
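The identity $\rho_{(1)} = \beta$ is easy to check by simulation. A quick sketch in Python (the value $\beta = 0.6$ and the series length are chosen arbitrarily):

```python
import numpy as np

rng = np.random.default_rng(42)
beta, n = 0.6, 200_000

# simulate y_{t+1} = beta * y_t + u_{t+1}, u white noise,
# starting from the stationary distribution (variance 1 / (1 - beta^2))
y = np.empty(n)
y[0] = rng.standard_normal() / np.sqrt(1.0 - beta ** 2)
for t in range(n - 1):
    y[t + 1] = beta * y[t] + rng.standard_normal()

# sample correlation between the series and its first lag
rho1 = np.corrcoef(y[:-1], y[1:])[0, 1]
# rho1 should be very close to beta = 0.6
```

With a long enough series, the sample lag-1 correlation lands within sampling error of the AR(1) coefficient, as the derivation above predicts.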