Dataset columns (name: type, value or length range):

idx: int64 (values 1 to 56k)
question: string (lengths 15 to 155)
answer: string (lengths 2 to 29.2k)
question_cut: string (lengths 15 to 100)
answer_cut: string (lengths 2 to 200)
conversation: string (lengths 47 to 29.3k)
conversation_cut: string (lengths 47 to 301)
45,001
Does order of events matter in Bayesian update?
In order for this to be the case, the random variables must be exchangeable. Your example is a little different since $p>1/2$ isn't an event. An event should be in the support of the likelihood. In this case, events are only constituted by binomial random variables, or sums thereof.
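Since the i.i.d. binomial likelihood makes the observations exchangeable, the posterior depends only on the counts of successes and failures, not on the order in which they arrive. A minimal Beta-Binomial sketch in Python (prior and data invented for illustration):

```python
# Beta(a, b) prior on p; each Bernoulli success does a += 1, each failure b += 1.
def update(prior, outcomes):
    a, b = prior
    for y in outcomes:
        a, b = (a + 1, b) if y == 1 else (a, b + 1)
    return (a, b)

data = [1, 0, 1, 1, 0]
print(update((1, 1), data))        # (4, 3)
print(update((1, 1), data[::-1]))  # (4, 3): same posterior in reverse order
```

The posterior is a function of the sufficient statistics (3 successes, 2 failures) alone, which is exactly what exchangeability buys you.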
45,002
How useful are linear hypotheses?
These linear hypotheses on the coefficient vector have three main uses: Testing the existence of relationships: We can test the existence of relationships between some subset of the explanatory variables and the response variable. To do this, let $\mathbf{e}_\mathcal{S}$ denote the indicator vector for the subset $\m...
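The answer breaks off mid-example, but the first use can be sketched end to end. In this NumPy sketch (data, the subset $\mathcal{S}$, and all names are invented for illustration), the indicator vectors $\mathbf{e}_j$ for $j \in \mathcal{S}$ are stacked into a restriction matrix $R$, and $H_0: R\beta = 0$ is tested with the usual F statistic:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
X = np.column_stack([np.ones(n), rng.normal(size=(n, 3))])
beta_true = np.array([1.0, 2.0, 0.0, 0.0])   # last two coefficients truly zero
y = X @ beta_true + rng.normal(size=n)

# OLS fit
XtX_inv = np.linalg.inv(X.T @ X)
beta_hat = XtX_inv @ X.T @ y
resid = y - X @ beta_hat
s2 = resid @ resid / (n - X.shape[1])        # residual variance estimate

# H0: beta_2 = beta_3 = 0; each row of R is an indicator vector e_j.
R = np.array([[0, 0, 1, 0],
              [0, 0, 0, 1]], dtype=float)
q = R.shape[0]
Rb = R @ beta_hat
F = Rb @ np.linalg.solve(R @ XtX_inv @ R.T, Rb) / (q * s2)
print(F)   # small here, since H0 is true; compare to an F(q, n-p) distribution
```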
45,003
How useful are linear hypotheses?
When you fit a linear model, statistical software gives you the point estimates, confidence intervals, test statistics, and p-values of the $\beta$s. If you are just interested in these $\beta$s, you can stop here (for example, simple linear regression has just one intercept and one slope, so the $\beta$s themselve...
45,004
Power calculations using pilot effect sizes
This just happens to be a topic that has popped up in a few different areas lately: This interactive tool that accompanies a publication on the topic: http://pilotpower.table1.org/ This Lakens pre-print: https://psyarxiv.com/b7z4q And this post from Andrew Gelman: http://andrewgelman.com/2018/03/20/purpose-pilot-study-demon...
45,005
Power calculations using pilot effect sizes
Perhaps worth expanding on one of the points @Tdisher makes. In his article, available here, entitled "On the use of a pilot sample for sample size determination", Browne discusses the role of estimating the standard deviation. The abstract states: To compute the sample size needed to achieve the planned power for a t‐test...
45,006
What is effect of increasing number of hidden layers in a Feed Forward NN? [duplicate]
I recommend taking a look at http://www.deeplearningbook.org/ ; they explain the concept of "Capacity" really well (Chapter 5, page 110), which might give you some answers to your questions. I'll try my best though, for the sake of my answer. 1) Increasing the number of hidden layers might improve the accuracy or m...
45,007
Why is the correlation coefficient a limited measure of dependence?
This is explained in the Wikipedia entry for Correlation and Dependence. Correlation basically measures how close two variables are to having a linear relationship between them. Consider now $X \sim U(-1, 1)$, and $Y = X^2$. Then if you know $X$, you know $Y$ exactly, and if you know $Y$, you know $X$ up to its sign. H...
45,008
Why is the correlation coefficient a limited measure of dependence?
A simple example: the correlation between a random variable $x$ and its square $x^2$ is zero for any distribution on $\mathbb{R}$ that is symmetric about zero. Here are the mean of the variable and of its square: $$\mu=\int x\, dF(x)=0$$ $$\sigma^2=\int x^2\, dF(x)$$ Let's calculate the Pearson correlation: $$\rho=\frac{\int x x^2 dF(x)}{\mu \sigma^...
45,009
Why does the resulting matrix from Cholesky decomposition of a covariance matrix when multiplied by its transpose not give back the covariance matrix?
As explained in my comment, the inconvenient truth is that the Cholesky decomposition, while usually defined as $K=LL^T$ with $L$ lower triangular, is equally valid as $K=U^TU$ with $U$ upper triangular. The implementation of Cholesky decomposition in LAPACK (the libraries our computers use to compute Linear Alge...
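A quick NumPy check (toy 2x2 covariance matrix, values arbitrary) shows that both conventions reproduce $K$ when multiplied in the right order, which is usually the source of the confusion:

```python
import numpy as np

K = np.array([[4.0, 2.0],
              [2.0, 3.0]])          # a toy covariance matrix

L = np.linalg.cholesky(K)           # NumPy returns the LOWER triangular factor
assert np.allclose(L @ L.T, K)      # K = L L^T

U = L.T                             # the upper triangular factor (what chol() in R/MATLAB returns)
assert np.allclose(U.T @ U, K)      # equally valid: K = U^T U
# ...but U @ U.T does NOT give back K in general:
print(np.allclose(U @ U.T, K))      # False
```

So if your software hands you the upper factor, multiply it as $U^T U$, not $U U^T$.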
45,010
Why does the resulting matrix from Cholesky decomposition of a covariance matrix when multiplied by its transpose not give back the covariance matrix?
So, why do you think that chol(S) returns your A and not A'? In fact it does return A' if you look at the values or read the documentation: it returns the upper triangular factor, which corresponds to A' in your Wiki reference.
45,011
Batch Learning w/Random Forest Sklearn [closed]
Yes, Batch Learning is certainly possible in scikit-learn. When you first initialize your RandomForestClassifier object you'll want to set the warm_start parameter to True. This means that successive calls to model.fit will not fit entirely new models, but add successive trees. Here's some pseudo-code to get you start...
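Since the original pseudo-code is cut off, here is a hedged, runnable sketch of the same idea (the dataset and batch sizes are invented). The key detail is that n_estimators must be increased before each fit call, because warm_start only ever adds trees:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=300, random_state=0)
batches = [(X[:100], y[:100]), (X[100:200], y[100:200]), (X[200:], y[200:])]

# warm_start=True makes each fit() call ADD trees instead of refitting,
# so we grow n_estimators before each batch.
model = RandomForestClassifier(n_estimators=10, warm_start=True, random_state=0)
for i, (Xb, yb) in enumerate(batches):
    model.n_estimators = 10 * (i + 1)
    model.fit(Xb, yb)               # the 10 new trees train on this batch only

print(len(model.estimators_))       # 30
```

One caveat: existing trees are never revisited, so early trees only ever see early batches. This is incremental tree-adding, not true online learning.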
45,012
Is $Pr(x \leq C)$ equal to $Pr(\sqrt{x} \leq \sqrt{C})$?
Yes, because $x \leq C \Leftrightarrow \sqrt{x} \leq \sqrt{C}$ for $x, C \geq 0$. This means that the set of events $\{A \in \Omega: x(A) \leq C\}$ equals the set $\{A \in \Omega: \sqrt{x(A)} \leq \sqrt{C}\}$, and so their probabilities are equal.
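An empirical sanity check in Python (the distribution and the value of $C$ are chosen arbitrarily): since the two events contain exactly the same sample points, the two empirical probabilities agree exactly, not just approximately:

```python
import random

random.seed(1)
xs = [random.uniform(0, 10) for _ in range(10_000)]
C = 4.0

p_left = sum(x <= C for x in xs) / len(xs)                  # Pr(x <= C)
p_right = sum(x ** 0.5 <= C ** 0.5 for x in xs) / len(xs)   # Pr(sqrt(x) <= sqrt(C))
print(p_left == p_right)   # True: the same samples satisfy both conditions
```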
45,013
How does perfect separation in logistic regression affect the AUC?
Why not try a simple simulation to try to figure it out? Here is one, coded in R:

library(ROCR)   # we'll use this package for the ROC & AUC
set.seed(8365)  # this makes the example exactly reproducible
x = c(runif(50, min=0, max=4),  # the x data have a gap from 4 to 6
      runif...
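The qualitative outcome can also be read off the rank definition of AUC: it is the probability that a random positive outscores a random negative. A small Python sketch (the fitted probabilities are invented to mimic perfect separation):

```python
# AUC as the Mann-Whitney statistic: the probability that a randomly chosen
# positive gets a higher score than a randomly chosen negative.
def auc(neg_scores, pos_scores):
    wins = sum(p > n for p in pos_scores for n in neg_scores)
    ties = sum(p == n for p in pos_scores for n in neg_scores)
    return (wins + 0.5 * ties) / (len(pos_scores) * len(neg_scores))

# With perfect separation, every fitted probability for y=1 exceeds every
# fitted probability for y=0, so the AUC is 1 regardless of any threshold.
neg = [0.001, 0.003, 0.02]   # hypothetical fitted probabilities for y = 0
pos = [0.97, 0.995, 0.999]   # hypothetical fitted probabilities for y = 1
print(auc(neg, pos))          # 1.0
```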
45,014
How does perfect separation in logistic regression affect the AUC?
@gung had a great answer. I just want to add more explanation of why "no matter what threshold you use, you will have perfect accuracy". If we add one more line to @gung's code to check the predicted probabilities, we can see this: essentially, for all the data points the predicted probability is either 0 or 1; this is why th...
45,015
How does the second derivative inform an update step in Gradient Descent?
I agree with your distaste for the writing. It seems as though you have an understanding of what is going on, but I will attempt to clarify why the second derivative is important. Consider a two-dimensional orthogonal system. Since they are orthogonal we can look at them independently, and together. This need not be th...
45,016
How does the second derivative inform an update step in Gradient Descent?
Deviation from an approximation by a linear function: an estimate of the improvement after a gradient step approximates the value $f(x_{n+1})$ at the point $x_{n+1}$ by assuming that the slope of the function is constant, i.e. it approximates the function with a linear function. Your expectation might be that for a small chang...
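The gap between the linear prediction and the actual improvement is exactly what the second derivative measures. A tiny numeric check (the quadratic and step size are chosen purely for illustration):

```python
# For a gradient step x -> x - eta * f'(x), the first-order (linear) model
# predicts a decrease of eta * f'(x)^2. The second-order Taylor term subtracts
# the curvature correction 0.5 * eta^2 * f''(x) * f'(x)^2, so with positive
# curvature the true decrease is SMALLER than the linear prediction.
f = lambda x: x ** 2          # f'(x) = 2x, f''(x) = 2
x, eta = 3.0, 0.1

grad = 2 * x
linear_pred = eta * grad ** 2                                # 3.6
second_pred = linear_pred - 0.5 * eta ** 2 * 2 * grad ** 2   # 3.24
actual = f(x) - f(x - eta * grad)                            # 3.24
print(linear_pred, second_pred, actual)
```

For a quadratic the second-order prediction is exact; for general $f$ it is still the leading correction, which is why the second derivative informs the step size.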
45,017
How does the second derivative inform an update step in Gradient Descent?
Second derivatives are used to understand the rate of change of the derivatives. Given the huge number of hyper-parameters involved in building a model, it is important to detect early how the accuracy of the model being trained is evolving. Many of us spend considerable time training models with dif...
45,018
Pearson correlation between a variable and its square
You are curious about whether your value of $r$ is "too high": it seems you think that, because $X$ and $X^2$ do not have an exactly linear relationship, Pearson's $r$ should be rather low. The high $r$ is not telling you that the relationship is linear, but it is telling you that the relationship is rather close ...
45,019
Pearson correlation between a variable and its square
The Pearson correlation measures the closeness to a linear relationship. If $X$ is positive, then the correlation between $X$ and $X^2$ is often fairly close to 1. If you want to measure the strength of monotonic relationship, there are a number of other choices, of which the two best known are the Kendall correlation...
45,020
What does P(A|B)*P(A|C) simplify to?
Let's say we have a problem of predicting whether a storm is coming or not. So we'd like to predict whether a storm is coming or not (event $A$), and we have some clues available to us, namely the amount of clouds in the sky (event $B$) and how scared your dogs are (event $C$). We can visualise the problem at hand usi...
45,021
What does P(A|B)*P(A|C) simplify to?
In my opinion your problem is not about that expression but about modelling. You have different clues (clouds, scared dogs) which provide evidence for a forthcoming event (rain). In other words, if I understand you, your question is actually more about: how do you combine different clues? This question is dealt with in th...
45,022
What does P(A|B)*P(A|C) simplify to?
Not really a simplification (if that is possible at all) but, if you wish a relation between $P(a \vert b)P(a \vert c)$ and $P(a \vert b,c)$, you could say that $P(a \vert b)P(a \vert c) = P(a \vert b,c)^2 \frac{P(c \vert b)}{P(c \vert a,b)} \frac{P(b \vert c)}{P(b \vert a,c)}$ I think that this relation is more the...
45,023
What does P(A|B)*P(A|C) simplify to?
Bayes' Theorem states: $$P(A|B)=\frac{P(B|A)P(A)}{P(B)}$$ so logically we have: $$P(A|B)P(A|C)\\ =\frac{P(B|A)P(A)}{P(B)}\frac{P(C|A)P(A)}{P(C)}\\ =\frac{P(B|A)P(C|A)P(A)^2}{P(B)P(C)} $$ As $B$ and $C$ are independent, I think this is as far as this route extends.
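The two Bayes inversions in that derivation hold for any joint distribution (independence of $B$ and $C$ is not needed for the algebra itself), so the identity can be checked numerically. A Python sketch with a randomly generated joint distribution over three binary events (all values arbitrary):

```python
import itertools, random

random.seed(0)
# A hypothetical joint distribution over three binary events (A, B, C).
w = {k: random.random() for k in itertools.product([0, 1], repeat=3)}
Z = sum(w.values())
P = {k: v / Z for k, v in w.items()}

def prob(cond):
    """Probability of the set of outcomes (a, b, c) satisfying cond."""
    return sum(p for k, p in P.items() if cond(*k))

pA = prob(lambda a, b, c: a == 1)
pB = prob(lambda a, b, c: b == 1)
pC = prob(lambda a, b, c: c == 1)
pA_B = prob(lambda a, b, c: a == 1 and b == 1) / pB   # P(A|B)
pA_C = prob(lambda a, b, c: a == 1 and c == 1) / pC   # P(A|C)
pB_A = prob(lambda a, b, c: a == 1 and b == 1) / pA   # P(B|A)
pC_A = prob(lambda a, b, c: a == 1 and c == 1) / pA   # P(C|A)

lhs = pA_B * pA_C
rhs = pB_A * pC_A * pA ** 2 / (pB * pC)
print(abs(lhs - rhs) < 1e-12)   # True for any joint distribution
```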
45,024
What does P(A|B)*P(A|C) simplify to?
As JonMark Perry has already mentioned, Bayes' theorem rules out your initial suspicion. The rules for conditional probabilities specifically allow multiplying probabilities only when conditioning on the same event (either B, or C, or both simultaneously). To show you a visualisation of the 2 proba...
45,025
What does P(A|B)*P(A|C) simplify to?
If the objective is to somehow combine the impacts of B and C on the probability of A, I think it makes sense to evaluate the probabilities $P(A|B\cup C) $ and $P(A|B\cap C)$ where: $P(A|B\cup C) =\frac{P(A|B)P(B)+P(A|C)P(C)-P(A\cap B\cap C)}{P(B\cup C)}$ and $P(A|B\cap C) =\frac{P(A\cap B \cap C)}{P(B \cap C)}$
45,026
How can I obtain z-values instead of t-values in linear mixed-effect model (lmer vs glmer)?
tl;dr lmer (linear mixed models) labels this column as a "t statistic", while glmer (generalized linear mixed models) labels it as a "Z statistic", but they're actually the same number. This mirrors the difference between the way lm and glm report their output. The "t statistics" reported by lmer (assuming a Gaussian d...
45,027
Lasso regression coefficients values
After you have done LASSO you should generally NOT use the selected variables in a separate linear regression. There are several ways to select a subset of predictor variables for a model. For example, you could use stepwise regression or, with few enough predictors, you could examine all possible subsets of predictors...
Lasso regression coefficients values
After you have done LASSO you should generally NOT use the selected variables in a separate linear regression. There are several ways to select a subset of predictor variables for a model. For example
Lasso regression coefficients values After you have done LASSO you should generally NOT use the selected variables in a separate linear regression. There are several ways to select a subset of predictor variables for a model. For example, you could use stepwise regression or, with few enough predictors, you could exami...
Lasso regression coefficients values After you have done LASSO you should generally NOT use the selected variables in a separate linear regression. There are several ways to select a subset of predictor variables for a model. For example
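A tiny sketch of the shrinkage point (toy, noise-free data, so the numbers are exact): on the same data OLS recovers the slope 3, while the lasso soft-thresholds it to 2.2; refitting OLS on the lasso-selected variable would silently discard that shrinkage.

```python
import numpy as np
from sklearn.linear_model import Lasso, LinearRegression

# Noise-free toy data: y = 3x, so OLS recovers the slope exactly.
X = np.array([[0.0], [1.0], [2.0], [3.0]])
y = np.array([0.0, 3.0, 6.0, 9.0])

ols = LinearRegression().fit(X, y)    # slope = 3.0
lasso = Lasso(alpha=1.0).fit(X, y)    # slope soft-thresholded below 3.0

# sklearn's lasso objective is (1/2n)||y - Xw||^2 + alpha*||w||_1, which
# for this data gives w = (Sxy - n*alpha)/Sxx = (15 - 4)/5 = 2.2.
```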
45,028
Why use squared loss on probabilities instead of logistic loss?
Squared loss on binary outcomes is called the Brier score. It's valid in the sense of being a "proper scoring rule", because you'll get the lowest mean squared error when you use the correct probability. In other words, logistic loss and squared loss have the same minimum. This paper compares the properties of the Bri...
Why use squared loss on probabilities instead of logistic loss?
Squared loss on binary outcomes is called the Brier score. It's valid in the sense of being a "proper scoring rule", because you'll get the lowest mean squared error when you use the correct probabil
Why use squared loss on probabilities instead of logistic loss? Squared loss on binary outcomes is called the Brier score. It's valid in the sense of being a "proper scoring rule", because you'll get the lowest mean squared error when you use the correct probability. In other words, logistic loss and squared loss have...
Why use squared loss on probabilities instead of logistic loss? Squared loss on binary outcomes is called the Brier score. It's valid in the sense of being a "proper scoring rule", because you'll get the lowest mean squared error when you use the correct probabil
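The "proper scoring rule" claim is easy to check numerically: for a Bernoulli(p) outcome, the expected Brier score of a forecast q is minimised exactly at q = p. A minimal sketch with p = 0.7:

```python
import numpy as np

# Y ~ Bernoulli(p). Expected Brier score of forecast q:
# E[(q - Y)^2] = (1 - p) * q^2 + p * (1 - q)^2, minimised at q = p.
p = 0.7
q = np.linspace(0.0, 1.0, 101)
expected_brier = (1 - p) * q**2 + p * (1 - q)**2

best_q = q[np.argmin(expected_brier)]   # 0.7: reporting the truth is optimal
min_score = expected_brier.min()        # p * (1 - p) = 0.21
```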
45,029
Why does Maximum Likelihood estimation maximize probability density instead of probability
$f(x_i, \theta)$ may not be a probability; it is a density function. In general statistics, we don't want to have to make special exceptions for continuous versus discrete random variables all the time, especially since there is a field of mathematics that gives us a unified approach yet allows us to be rigorous about ...
Why does Maximum Likelihood estimation maximize probability density instead of probability
$f(x_i, \theta)$ may not be a probability; it is a density function. In general statistics, we don't want to have to make special exceptions for continuous versus discrete random variables all the tim
Why does Maximum Likelihood estimation maximize probability density instead of probability $f(x_i, \theta)$ may not be a probability; it is a density function. In general statistics, we don't want to have to make special exceptions for continuous versus discrete random variables all the time, especially since there is ...
Why does Maximum Likelihood estimation maximize probability density instead of probability $f(x_i, \theta)$ may not be a probability; it is a density function. In general statistics, we don't want to have to make special exceptions for continuous versus discrete random variables all the tim
45,030
Why does Maximum Likelihood estimation maximize probability density instead of probability
Your question applies only to continuous random variables. In the case of discrete random variables you do use probabilities and not densities. For a continuous random variable, the probability of each point (one value of the variable) is 0, and only intervals have positive probabilities obtained by integrating the den...
Why does Maximum Likelihood estimation maximize probability density instead of probability
Your question applies only to continuous random variables. In the case of discrete random variables you do use probabilities and not densities. For a continuous random variable, the probability of eac
Why does Maximum Likelihood estimation maximize probability density instead of probability Your question applies only to continuous random variables. In the case of discrete random variables you do use probabilities and not densities. For a continuous random variable, the probability of each point (one value of the var...
Why does Maximum Likelihood estimation maximize probability density instead of probability Your question applies only to continuous random variables. In the case of discrete random variables you do use probabilities and not densities. For a continuous random variable, the probability of eac
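A minimal sketch of the continuous case (toy data, with σ fixed at 1): maximising the product of densities, i.e. minimising the negative log-likelihood, recovers the sample mean without any probabilities being involved.

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Maximising the product of densities = minimising the negative log-likelihood.
# For N(mu, 1) the maximiser is the sample mean -- no probabilities needed.
data = np.array([1.0, 2.0, 3.0, 4.0])

def nll(mu):
    # Negative log-likelihood up to an additive constant.
    return 0.5 * np.sum((data - mu) ** 2)

mu_hat = minimize_scalar(nll).x   # ~ 2.5, the sample mean
```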
45,031
Why does Maximum Likelihood estimation maximize probability density instead of probability
I read the question as: why do we start from the density function $f(\boldsymbol{x}|\theta)$ (with $\theta$ constant) to change point of view and interpret it as a function of $\theta$ (with $\boldsymbol{x}$'s constant) that we want to maximize? Intuitively and absolutely not rigorously, if we consider an infinitesimal...
Why does Maximum Likelihood estimation maximize probability density instead of probability
I read the question as: why do we start from the density function $f(\boldsymbol{x}|\theta)$ (with $\theta$ constant) to change point of view and interpret it as a function of $\theta$ (with $\boldsym
Why does Maximum Likelihood estimation maximize probability density instead of probability I read the question as: why do we start from the density function $f(\boldsymbol{x}|\theta)$ (with $\theta$ constant) to change point of view and interpret it as a function of $\theta$ (with $\boldsymbol{x}$'s constant) that we w...
Why does Maximum Likelihood estimation maximize probability density instead of probability I read the question as: why do we start from the density function $f(\boldsymbol{x}|\theta)$ (with $\theta$ constant) to change point of view and interpret it as a function of $\theta$ (with $\boldsym
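A one-line version of the infinitesimal argument in this answer: the width $\Delta x$ of each small interval does not depend on $\theta$, so it factors out of the likelihood and cannot change the maximiser.

```latex
L(\theta) \approx \prod_{i=1}^{n} f(x_i \mid \theta)\,\Delta x
          = (\Delta x)^n \prod_{i=1}^{n} f(x_i \mid \theta),
\qquad
\arg\max_{\theta} L(\theta) = \arg\max_{\theta} \prod_{i=1}^{n} f(x_i \mid \theta).
```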
45,032
Why does Maximum Likelihood estimation maximize probability density instead of probability
The key idea here is that although a point probability is not defined for a continuous probability distribution, we can easily see that the probability that the random variable 'X' is "around" x is equal to f(X=x)dx. Therefore, when this is multiplied over all of the points, the likelihood function would not be a...
Why does Maximum Likelihood estimation maximize probability density instead of probability
The key idea here is that although a point probability is not defined for a continuous probability distribution, we can easily see that the probability that the random variable 'X' is "aro
Why does Maximum Likelihood estimation maximize probability density instead of probability The key idea here is that although a point probability is not defined for a continuous probability distribution, we can easily see that the probability that the random variable 'X' is "around" x is equal to f(X=x)dx....
Why does Maximum Likelihood estimation maximize probability density instead of probability The key idea here is that although a point probability is not defined for a continuous probability distribution, we can easily see that the probability that the random variable 'X' is "aro
45,033
ROC as feature selection
Univariate feature selection is generally a poor method. This question is deftly answered by silverfish in the context of correlation, but all his arguments apply to your case as well. In short, there is no reason to believe that univariately checking how each individual variable $x$ is related to your response $y$ re...
ROC as feature selection
Univariate feature selection is generally a poor method. This question is deftly answered by silverfish in the context of correlation, but all his arguments apply to your case as well. In short, ther
ROC as feature selection Univariate feature selection is generally a poor method. This question is deftly answered by silverfish in the context of correlation, but all his arguments apply to your case as well. In short, there is no reason to believe that univariately checking how each individual variable $x$ is relate...
ROC as feature selection Univariate feature selection is generally a poor method. This question is deftly answered by silverfish in the context of correlation, but all his arguments apply to your case as well. In short, ther
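A minimal sketch of the failure mode (toy XOR data, invented for illustration): each feature alone has AUC 0.5, so univariate screening would discard both, yet jointly they determine the response exactly.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

# Toy XOR truth table: y is fully determined by (x1, x2) jointly.
x1 = np.array([0, 0, 1, 1])
x2 = np.array([0, 1, 0, 1])
y = x1 ^ x2                      # [0, 1, 1, 0]

# Screened one at a time, each feature looks useless...
auc1 = roc_auc_score(y, x1)      # 0.5
auc2 = roc_auc_score(y, x2)      # 0.5

# ...but the joint rule classifies perfectly.
joint_acc = np.mean((x1 ^ x2) == y)   # 1.0
```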
45,034
ROC as feature selection
Echoing Matthew's response in another light: many markers have a concept of stratified predictive accuracy. In this sense they provide extremely good predictive accuracy in a subgroup or in tandem with another marker. Two examples from the health sciences: Suppose for instance two types of breast cancer grow in women...
ROC as feature selection
Echoing Matthew's response in another light: many markers have a concept of stratified predictive accuracy. In this sense they provide extremely good predictive accuracy in a subgroup or in tandem w
ROC as feature selection Echoing Matthew's response in another light: many markers have a concept of stratified predictive accuracy. In this sense they provide extremely good predictive accuracy in a subgroup or in tandem with another marker. Two examples from the health sciences: Suppose for instance two types of br...
ROC as feature selection Echoing Matthew's response in another light: many markers have a concept of stratified predictive accuracy. In this sense they provide extremely good predictive accuracy in a subgroup or in tandem w
45,035
ROC as feature selection
Since you are using SAS, I thought I'd share this. I'm not sure what model you are using, but if you are using logistic regression this may be a useful resource. Sample 54866: Logistic model selection using area under curve (AUC) or R-square selection criteria Under "Details", it reads: In addition to the AIC and BIC ...
ROC as feature selection
Since you are using SAS, I thought I'd share this. I'm not sure what model you are using, but if you are using logistic regression this may be a useful resource. Sample 54866: Logistic model selection
ROC as feature selection Since you are using SAS, I thought I'd share this. I'm not sure what model you are using, but if you are using logistic regression this may be a useful resource. Sample 54866: Logistic model selection using area under curve (AUC) or R-square selection criteria Under "Details", it reads: In add...
ROC as feature selection Since you are using SAS, I thought I'd share this. I'm not sure what model you are using, but if you are using logistic regression this may be a useful resource. Sample 54866: Logistic model selection
45,036
Normal Distribution with Uniform Mean
You can compute the mean and variance of the compound distribution $X$ with the law of total expectation and law of total variance. Mean: $$ E[X] = E \left[ E [X \mid U ] \right] = E[U] = \frac{b + a}{2}$$ Which is, as you observe, the mean of the uniform distribution. Variance: $$ Var[X] = E[ Var[X \mid U] ] + Var[ E[...
Normal Distribution with Uniform Mean
You can compute the mean and variance of the compound distribution $X$ with the law of total expectation and law of total variance. Mean: $$ E[X] = E \left[ E [X \mid U ] \right] = E[U] = \frac{b + a}
Normal Distribution with Uniform Mean You can compute the mean and variance of the compound distribution $X$ with the law of total expectation and law of total variance. Mean: $$ E[X] = E \left[ E [X \mid U ] \right] = E[U] = \frac{b + a}{2}$$ Which is, as you observe, the mean of the uniform distribution. Variance: $$...
Normal Distribution with Uniform Mean You can compute the mean and variance of the compound distribution $X$ with the law of total expectation and law of total variance. Mean: $$ E[X] = E \left[ E [X \mid U ] \right] = E[U] = \frac{b + a}
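The two formulas are easy to verify by simulation (a = 0, b = 6, σ = 2 are arbitrary illustrative values):

```python
import numpy as np

# Monte Carlo check of the compound-distribution results:
# U ~ Uniform(a, b), X | U ~ N(U, sigma^2)
# => E[X] = (a + b)/2 and Var[X] = sigma^2 + (b - a)^2 / 12.
rng = np.random.default_rng(0)
a, b, sigma = 0.0, 6.0, 2.0

u = rng.uniform(a, b, size=1_000_000)
x = rng.normal(u, sigma)

mean_x = x.mean()   # ~ (0 + 6)/2 = 3.0
var_x = x.var()     # ~ 4 + 36/12 = 7.0
```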
45,037
Normal Distribution with Uniform Mean
A distribution related to a special case of your question was described by Bhattacharjee, Pandit, and Mohan (1963). It assumes that the uniform distribution is centered around the global mean $\mu$ and has $(\mu-a, \mu+a)$ bounds. In standard form it has probability density function $$ f(z) = \frac{1}{2a} \left[\Phi\l...
Normal Distribution with Uniform Mean
A distribution related to a special case of your question was described by Bhattacharjee, Pandit, and Mohan (1963). It assumes that the uniform distribution is centered around the global mean $\mu$ an
Normal Distribution with Uniform Mean A distribution related to a special case of your question was described by Bhattacharjee, Pandit, and Mohan (1963). It assumes that the uniform distribution is centered around the global mean $\mu$ and has $(\mu-a, \mu+a)$ bounds. In standard form it has probability density functio...
Normal Distribution with Uniform Mean A distribution related to a special case of your question was described by Bhattacharjee, Pandit, and Mohan (1963). It assumes that the uniform distribution is centered around the global mean $\mu$ an
45,038
Interpretation of p-value in Mann-Whitney rank test
The p-value represents the probability of getting a test-statistic at least as extreme$^\dagger$ as the one you had in your sample, if the null hypothesis were true. A high p-value indicates you saw something really consistent with the null hypothesis (e.g. tossing 151 heads in 300 tosses of a coin you're examining for...
Interpretation of p-value in Mann-Whitney rank test
The p-value represents the probability of getting a test-statistic at least as extreme$^\dagger$ as the one you had in your sample, if the null hypothesis were true. A high p-value indicates you saw s
Interpretation of p-value in Mann-Whitney rank test The p-value represents the probability of getting a test-statistic at least as extreme$^\dagger$ as the one you had in your sample, if the null hypothesis were true. A high p-value indicates you saw something really consistent with the null hypothesis (e.g. tossing 15...
Interpretation of p-value in Mann-Whitney rank test The p-value represents the probability of getting a test-statistic at least as extreme$^\dagger$ as the one you had in your sample, if the null hypothesis were true. A high p-value indicates you saw s
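The coin example can be reproduced directly with an exact binomial test (scipy's binomtest; the 250-head case is invented here for contrast):

```python
from scipy.stats import binomtest

# The answer's example: 151 heads in 300 tosses of a fair coin. The count
# sits almost exactly at the null expectation of 150, so the two-sided
# p-value is very large -- consistent with H0, but not proof of it.
consistent = binomtest(151, n=300, p=0.5).pvalue   # ~ 0.95

# By contrast, a lopsided result yields a tiny p-value.
lopsided = binomtest(250, n=300, p=0.5).pvalue
```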
45,039
Interpretation of p-value in Mann-Whitney rank test
If you are trying to prove that the two vectors are approximately equal then you have two issues: 1) What exactly do you mean by "equal"? and 2) No usual test is appropriate. For 1) you have to consider if the data are independent or not and whether you want to test means or medians or whatever. For 2) You should lo...
Interpretation of p-value in Mann-Whitney rank test
If you are trying to prove that the two vectors are approximately equal then you have two issues: 1) What exactly do you mean by "equal"? and 2) No usual test is appropriate. For 1) you have to cons
Interpretation of p-value in Mann-Whitney rank test If you are trying to prove that the two vectors are approximately equal then you have two issues: 1) What exactly do you mean by "equal"? and 2) No usual test is appropriate. For 1) you have to consider if the data are independent or not and whether you want to test...
Interpretation of p-value in Mann-Whitney rank test If you are trying to prove that the two vectors are approximately equal then you have two issues: 1) What exactly do you mean by "equal"? and 2) No usual test is appropriate. For 1) you have to cons
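For point 2), the usual tool is an equivalence test such as TOST (two one-sided tests). A minimal paired-data sketch; the differences and the ±0.1 margin below are invented for illustration:

```python
import numpy as np
from scipy import stats

# Two one-sided tests (TOST) on paired differences d = x - y: declare
# equivalence if the mean difference is significantly above -delta AND
# significantly below +delta. Data and margin are made up.
d = np.array([0.1, -0.1, 0.05, -0.05, 0.0, 0.02, -0.02, 0.01, -0.01, 0.0])
delta = 0.1

n = len(d)
se = d.std(ddof=1) / np.sqrt(n)
t_lower = (d.mean() + delta) / se   # against H0: mean <= -delta
t_upper = (d.mean() - delta) / se   # against H0: mean >= +delta

# Both one-sided tests must reject; report the larger of the two p-values.
p_tost = max(stats.t.sf(t_lower, df=n - 1), stats.t.cdf(t_upper, df=n - 1))
```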
45,040
Transforming data
Transformations are like drugs ... Some are good for you and some aren't. Transforming data by scaling is almost always a good idea. Transforming time series data like taking differences can be a bad idea as an unwarranted difference can actually inject structure into the data. Transforming data by replacing anomalou...
Transforming data
Transformations are like drugs ... Some are good for you and some aren't. Transforming data by scaling is almost always a good idea. Transforming time series data like taking differences can be a ba
Transforming data Transformations are like drugs ... Some are good for you and some aren't. Transforming data by scaling is almost always a good idea. Transforming time series data like taking differences can be a bad idea as an unwarranted difference can actually inject structure into the data. Transforming data by ...
Transforming data Transformations are like drugs ... Some are good for you and some aren't. Transforming data by scaling is almost always a good idea. Transforming time series data like taking differences can be a ba
45,041
Transforming data
There is no particular reason for wanting to transform your data as far as the adequacy of the model is concerned. However you may want to re-scale your outcome to make the coefficients lie in a more manageable range. For instance instead of having sales as the raw count you might express it as so many millions or so m...
Transforming data
There is no particular reason for wanting to transform your data as far as the adequacy of the model is concerned. However you may want to re-scale your outcome to make the coefficients lie in a more
Transforming data There is no particular reason for wanting to transform your data as far as the adequacy of the model is concerned. However you may want to re-scale your outcome to make the coefficients lie in a more manageable range. For instance instead of having sales as the raw count you might express it as so man...
Transforming data There is no particular reason for wanting to transform your data as far as the adequacy of the model is concerned. However you may want to re-scale your outcome to make the coefficients lie in a more
45,042
Strong vs Weak Assumptions
Let $\mathbf u$ be the $T \times 1$ column error vector and $\mathbf X$ be the $T \times k$ regressor matrix, where $T$ is the sample size. Then strict exogeneity is defined as $$E\left(\mathbf u \mid \mathbf X\right) = \mathbf 0$$ This can be decomposed and written perhaps more clearly as $$E(u_t \mid \mathbf X) = 0...
Strong vs Weak Assumptions
Let $\mathbf u$ be the $T \times 1$ column error vector and $\mathbf X$ be the $T \times k$ regressor matrix, where $T$ is the sample size. Then strict exogeneity is defined as $$E\left(\mathbf u \m
Strong vs Weak Assumptions Let $\mathbf u$ be the $T \times 1$ column error vector and $\mathbf X$ be the $T \times k$ regressor matrix, where $T$ is the sample size. Then strict exogeneity is defined as $$E\left(\mathbf u \mid \mathbf X\right) = \mathbf 0$$ This can be decomposed and written perhaps more clearly as ...
Strong vs Weak Assumptions Let $\mathbf u$ be the $T \times 1$ column error vector and $\mathbf X$ be the $T \times k$ regressor matrix, where $T$ is the sample size. Then strict exogeneity is defined as $$E\left(\mathbf u \m
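Under the definitions above, the assumptions form a strongest-to-weakest chain, each implication following from the law of iterated expectations:

```latex
\underbrace{E(u_t \mid \mathbf X) = 0}_{\text{strict exogeneity (all leads and lags)}}
\;\Longrightarrow\;
\underbrace{E(u_t \mid \mathbf x_t) = 0}_{\text{contemporaneous exogeneity}}
\;\Longrightarrow\;
\underbrace{E(u_t \mathbf x_t) = \mathbf 0}_{\text{orthogonality (weakest)}}
```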
45,043
Strong vs Weak Assumptions
Think about a regression of sales of ice cream on advertising, where the error comes from the effect of weather, which is unobserved to you, but not to the ice cream man or his customers. You care about what advertising does to sales. Assume for the sake of simplicity that weather has no persistence across days, nor wa...
Strong vs Weak Assumptions
Think about a regression of sales of ice cream on advertising, where the error comes from the effect of weather, which is unobserved to you, but not to the ice cream man or his customers. You care abo
Strong vs Weak Assumptions Think about a regression of sales of ice cream on advertising, where the error comes from the effect of weather, which is unobserved to you, but not to the ice cream man or his customers. You care about what advertising does to sales. Assume for the sake of simplicity that weather has no pers...
Strong vs Weak Assumptions Think about a regression of sales of ice cream on advertising, where the error comes from the effect of weather, which is unobserved to you, but not to the ice cream man or his customers. You care abo
45,044
R - How are the significance codes determined when summarizing a logistic regression model?
Firstly, the z or t value (depending on what family you run) is the coefficient divided by the standard error. The p value is then derived from the normal or t distributions using this z or t value. The stars don't really add much in my view. You will see underneath the table of coefficients that there is a line which ...
R - How are the significance codes determined when summarizing a logistic regression model?
Firstly, the z or t value (depending on what family you run) is the coefficient divided by the standard error. The p value is then derived from the normal or t distributions using this z or t value. T
R - How are the significance codes determined when summarizing a logistic regression model? Firstly, the z or t value (depending on what family you run) is the coefficient divided by the standard error. The p value is then derived from the normal or t distributions using this z or t value. The stars don't really add mu...
R - How are the significance codes determined when summarizing a logistic regression model? Firstly, the z or t value (depending on what family you run) is the coefficient divided by the standard error. The p value is then derived from the normal or t distributions using this z or t value. T
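A sketch of the whole pipeline from coefficient to star (the coefficient 0.8 and SE 0.3 are illustrative, not from a real model); the cut-offs are the ones printed in R's Signif. codes legend:

```python
from scipy import stats

# The z value in glm output is coefficient / standard error; the p-value
# then comes from the standard normal. Numbers here are illustrative only.
coef, se = 0.8, 0.3
z = coef / se
p = 2 * stats.norm.sf(abs(z))

# R's default significance codes, as shown in the summary() legend.
star = next(s for cut, s in [(0.001, "***"), (0.01, "**"),
                             (0.05, "*"), (0.1, "."), (1.01, " ")] if p < cut)
```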
45,045
R - How are the significance codes determined when summarizing a logistic regression model?
If you want to know right away which variables (independent variables, IVs) impact your dependent variable (DV) the most, you could use the following: install.packages("caret") # run install only if you've never installed it before library(caret) fit<-lm(DV~IV1+IV2+IV3, data=mydata) varImp(fit, scale = FALSE) ...
R - How are the significance codes determined when summarizing a logistic regression model?
If you want to know right away which variables (independent variables, IVs) impact your dependent variable (DV) the most, you could use the following: install.packages("caret") # run install only if
R - How are the significance codes determined when summarizing a logistic regression model? If you want to know right away which variables (independent variables, IVs) impact your dependent variable (DV) the most, you could use the following: install.packages("caret") # run install only if you've never installed it be...
R - How are the significance codes determined when summarizing a logistic regression model? If you want to know right away which variables (independent variables, IVs) impact your dependent variable (DV) the most, you could use the following: install.packages("caret") # run install only if
45,046
Do I need to guess a distribution to use MLE? [duplicate]
To apply parametric MLE, you need to specify a parametric distribution. For non-parametric MLE, you do not specify a parametric distribution. The most popular of the non-parametric MLE approaches is called Empirical Likelihood https://en.wikipedia.org/wiki/Empirical_likelihood (not much of a write up on that page). T...
Do I need to guess a distribution to use MLE? [duplicate]
To apply parametric MLE, you need to specify a parametric distribution. For non-parametric MLE, you do not specify a parametric distribution. The most popular of the non-parametric MLE approaches is
Do I need to guess a distribution to use MLE? [duplicate] To apply parametric MLE, you need to specify a parametric distribution. For non-parametric MLE, you do not specify a parametric distribution. The most popular of the non-parametric MLE approaches is called Empirical Likelihood https://en.wikipedia.org/wiki/Empi...
Do I need to guess a distribution to use MLE? [duplicate] To apply parametric MLE, you need to specify a parametric distribution. For non-parametric MLE, you do not specify a parametric distribution. The most popular of the non-parametric MLE approaches is
45,047
Do I need to guess a distribution to use MLE? [duplicate]
To apply MLE you need to assume a distribution. So, yes, you need to have a distribution in mind, usually. The standard intro texts use Gaussian. For instance, they'd show you how Gaussian distribution leads to MLE in linear model to the same estimators as in least squares regression. Gaussian distribution with indepen...
Do I need to guess a distribution to use MLE? [duplicate]
To apply MLE you need to assume a distribution. So, yes, you need to have a distribution in mind, usually. The standard intro texts use Gaussian. For instance, they'd show you how Gaussian distributio
Do I need to guess a distribution to use MLE? [duplicate] To apply MLE you need to assume a distribution. So, yes, you need to have a distribution in mind, usually. The standard intro texts use Gaussian. For instance, they'd show you how Gaussian distribution leads to MLE in linear model to the same estimators as in le...
Do I need to guess a distribution to use MLE? [duplicate] To apply MLE you need to assume a distribution. So, yes, you need to have a distribution in mind, usually. The standard intro texts use Gaussian. For instance, they'd show you how Gaussian distributio
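One way to compare candidate families, as the answer suggests, is by their maximised log-likelihoods (simulated Gaussian data; note the caveat elsewhere on this page that picking and testing a family on the same data risks overfitting):

```python
import numpy as np
from scipy import stats

# Exploratory comparison of candidate families by maximised log-likelihood.
rng = np.random.default_rng(0)
data = rng.normal(loc=5.0, scale=1.0, size=500)   # simulated, all positive

ll_norm = stats.norm.logpdf(data, *stats.norm.fit(data)).sum()
ll_expon = stats.expon.logpdf(data, *stats.expon.fit(data, floc=0)).sum()

# The Gaussian family fits these data far better than the exponential;
# with equal-parameter families the log-likelihoods compare directly
# (otherwise penalise with AIC/BIC).
```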
45,048
Do I need to guess a distribution to use MLE? [duplicate]
How can I make this guess? As pointed out in other answers, sometimes you know what the distribution must be due to the nature of the data generating process. Consider Generalized Extreme Value Distribution as described in Wikipedia: By the extreme value theorem the GEV distribution is the only possible limit distri...
Do I need to guess a distribution to use MLE? [duplicate]
How can I make this guess? As pointed out in other answers, sometimes you know what the distribution must be due to the nature of the data generating process. Consider Generalized Extreme Value Distr
Do I need to guess a distribution to use MLE? [duplicate] How can I make this guess? As pointed out in other answers, sometimes you know what the distribution must be due to the nature of the data generating process. Consider Generalized Extreme Value Distribution as described in Wikipedia: By the extreme value theo...
Do I need to guess a distribution to use MLE? [duplicate] How can I make this guess? As pointed out in other answers, sometimes you know what the distribution must be due to the nature of the data generating process. Consider Generalized Extreme Value Distr
45,049
Do I need to guess a distribution to use MLE? [duplicate]
In general, no you cannot use MLE to find which family of distributions might provide a good parametric model for an outcome. That's not to say that there aren't some exploratory techniques that could shed some light on possibilities. But, as we know from statistics, using the same data as a hypothesis generating and h...
Do I need to guess a distribution to use MLE? [duplicate]
In general, no you cannot use MLE to find which family of distributions might provide a good parametric model for an outcome. That's not to say that there aren't some exploratory techniques that could
Do I need to guess a distribution to use MLE? [duplicate] In general, no you cannot use MLE to find which family of distributions might provide a good parametric model for an outcome. That's not to say that there aren't some exploratory techniques that could shed some light on possibilities. But, as we know from statis...
Do I need to guess a distribution to use MLE? [duplicate] In general, no you cannot use MLE to find which family of distributions might provide a good parametric model for an outcome. That's not to say that there aren't some exploratory techniques that could
45,050
Overfitting due to a unique identifier among features
The predictive value of an ID field will vary considerably from dataset to dataset, so in some cases it's probably OK to leave it in, and in others not. One case where it could have high predictive value (and should be taken out) is where you're trying to predict the age of something, and the ID is assigned according to a...
Overfitting due to a unique identifier among features
The predictive value of an ID field will vary considerably from dataset to dataset, so in some cases it's probably OK to leave it in, and in others not. One case where it could have high predictive value
Overfitting due to a unique identifier among features The predictive value of an ID field will vary considerably from dataset to dataset, so in some cases it's probably OK to leave it in, and in others not. One case where it could have high predictive value (and should be taken out) is where you're trying to predict the a...
Overfitting due to a unique identifier among features The predictive value of an ID field will vary considerably from dataset to dataset, so in some cases it's probably OK to leave it in, and in others not. One case where it could have high predictive value
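The memorisation risk is easy to demonstrate: fit a decision tree on nothing but a unique ID column with purely random labels (toy setup, invented for illustration):

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

# A tree happily memorises a unique ID column: perfect on the training
# rows, useless on rows with IDs it has never seen. Labels are pure noise.
rng = np.random.default_rng(0)
ids_train = np.arange(100).reshape(-1, 1)
ids_test = np.arange(100, 200).reshape(-1, 1)
y_train = rng.integers(0, 2, 100)
y_test = rng.integers(0, 2, 100)

tree = DecisionTreeClassifier(random_state=0).fit(ids_train, y_train)
train_acc = tree.score(ids_train, y_train)   # 1.0: pure memorisation
test_acc = tree.score(ids_test, y_test)      # ~ chance level
```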
45,051
Overfitting due to a unique identifier among features
"Certainly, any continuous column (with enough resolution) has the power to uniquely identify the example." - that's not true for a predictor that is treated as continuous. For instance, if you generate Y and X at random using uniform distribution, then all of the values of X will most likely be distinct. However, whe...
Overfitting due to a unique identifier among features
"Certainly, any continuous column (with enough resolution) has the power to uniquely identify the example." - that's not true for a predictor that is treated as continuous. For instance, if you genera
Overfitting due to a unique identifier among features "Certainly, any continuous column (with enough resolution) has the power to uniquely identify the example." - that's not true for a predictor that is treated as continuous. For instance, if you generate Y and X at random using uniform distribution, then all of the ...
Overfitting due to a unique identifier among features "Certainly, any continuous column (with enough resolution) has the power to uniquely identify the example." - that's not true for a predictor that is treated as continuous. For instance, if you genera
45,052
Overfitting due to a unique identifier among features
Another way to look at it is from the POV of a neural network. It is much easier to memorize which classes belong to which target, than to learn meaningful features. Including the identifier also encourages co-adaptation, which overfits on the training set.
Overfitting due to a unique identifier among features
Another way to look at it is from the POV of a neural network. It is much easier to memorize which classes belong to which target, than to learn meaningful features. Including the identifier also enc
Overfitting due to a unique identifier among features Another way to look at it is from the POV of a neural network. It is much easier to memorize which classes belong to which target, than to learn meaningful features. Including the identifier also encourages co-adaptation, which overfits on the training set.
Overfitting due to a unique identifier among features Another way to look at it is from the POV of a neural network. It is much easier to memorize which classes belong to which target, than to learn meaningful features. Including the identifier also enc
45,053
How can I compute the standard error of the Wald estimator?
Here is my answer to my question. I hope there is no mistake in the calculations. We have: $y_{1,i}$ a dichotomous random variable following a Bernoulli distribution with parameter $\mu_{y_1}$ $y_{0,i}$ a dichotomous random variable following a Bernoulli distribution with parameter $\mu_{y_0}$ $x_{1,i}$ a dichotomous rando...
How can I compute the standard error of the Wald estimator?
Here is my answer to my question. I hope there is no mistake in the calculations. We have: $y_{1,i}$ a dichotomous random variable following a Bernoulli distribution with parameter $\mu_{y_1}$ $y_{0,i}$ a
How can I compute the standard error of the Wald estimator? Here is my answer to my question. I hope there is no mistake in the calculations. We have: $y_{1,i}$ a dichotomous random variable following a Bernoulli distribution with parameter $\mu_{y_1}$ $y_{0,i}$ a dichotomous random variable following a Bernoulli distribut...
How can I compute the standard error of the Wald estimator? Here is my answer to my question. I hope there is no mistake in the calculations. We have: $y_{1,i}$ a dichotomous random variable following a Bernoulli distribution with parameter $\mu_{y_1}$ $y_{0,i}$ a
45,054
How can I compute the standard error of the Wald estimator?
Consider pages 287-290 in the original Wald (1940) paper. It walks you through the derivation of the variance.
45,055
How can I compute the standard error of the Wald estimator?
For future readers: the $\beta_1$ in @MichaelChirico's post is $\beta_\mathrm{wald}$ (the average treatment effect, in usual use), and the covariance formula follows from $E[Y|X] = \beta_0 + \beta_1 X$ without loss of generality (since X is binary). (Apologies for the extra answer; I have insufficient reputation to com...
45,056
Probability of obtaining same sequence in $10$ tosses of a coin
This question has two reasonable interpretations. Neither requires any calculation at all, but only careful reasoning about independence and disjoint outcomes. Conditional on the first ten flips, what is the chance the next ten flips match them in sequence? If we were to re-order the sequences of both flips in the sa...
45,057
Probability of obtaining same sequence in $10$ tosses of a coin
You need to consider whether, given that a coin gave a "Head" on an earlier toss, the occurrence of a "Tail" depends on that previous event or not. The sequence should only matter if the path taken to the last coin toss impacts the last coin toss. This would be true if we were sampling without replac...
45,058
Probability of obtaining same sequence in $10$ tosses of a coin
Suppose each coin toss is independent and that the first sequence of (ten) flips is $X_1, X_2, \dots, X_{10}$, where $\forall i \in \{1,\dots,10\}: X_i \in \{H,T\}$, meaning each flip is either $H$ (heads) or $T$ (tails). Let the observed value of the $i^{th}$ flip be denoted $F_i$, so that $\mathbb{P}(X_{i+10} = F_i)$ ...
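The conditional interpretation can be checked numerically under the same independence assumption. A minimal sketch (illustrative only): the chance that ten further fair flips reproduce a given observed sequence is $(1/2)^{10}$, and a simulation encoding each 10-flip sequence as a 10-bit integer agrees.

```python
from fractions import Fraction
import random

# Conditional interpretation: each of the ten further flips matches the
# corresponding observed flip with probability 1/2, independently.
p_match = Fraction(1, 2) ** 10
print(p_match)  # 1/1024

# Monte Carlo check: encode each 10-flip sequence as a 10-bit integer.
random.seed(0)
trials = 500_000
hits = sum(random.getrandbits(10) == random.getrandbits(10) for _ in range(trials))
print(hits / trials)  # close to 1/1024 ~ 0.000977
```

The other interpretation (two fully pre-specified sequences both occurring) simply squares this, giving $(1/2)^{20}$.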
45,059
Neural network working well on datasets near the training set, but poorly on farther datasets. Why?
You must have some autocorrelation in your data. In most cases, if one ignores correlation structure in the data (pseudolikelihood), the effect is that the estimated error in the data is too small. Suppose you considered the weather on two consecutive days, they are far more likely to be similar than the weather on two...
45,060
Neural network working well on datasets near the training set, but poorly on farther datasets. Why?
Hypothesis 1: You have applied cross-validation incorrectly. The information encoded in position is somehow related to the outcome. To ameliorate this, you could try not selecting your sets to be adjacent but instead a random partition. That might be enough to "average out" the effect of position. Hypothesis 2: By igno...
45,061
Neural network working well on datasets near the training set, but poorly on farther datasets. Why?
I think you have an issue with the way the position is encoded, as above, but my take on it is a little bit different, in that I think you have it encoded as an absolute distance to a reference point. If so your NN will work well only within the boundaries of the farthest point. To obviate this problem, one solution wi...
45,062
Convexity of linear regression
With 2 parameters and a single data point, the loss is not strictly convex because the matrix of observations and predictors is rank-deficient. Indeed, as you observe, there is a line of many "equally good" solutions, and this is because for any choice of $x$ there is a corresponding $y$ which achieves the minimum: how many po...
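The rank deficiency can be made concrete: for $f(x,y)=(x+y-2)^2$ the Hessian is the constant matrix $\begin{pmatrix}2&2\\2&2\end{pmatrix}$, which is positive semi-definite but singular, so $f$ is convex but not strictly convex. A quick numeric check (numpy only):

```python
import numpy as np

# Hessian of f(x, y) = (x + y - 2)^2 is constant:
H = np.array([[2.0, 2.0],
              [2.0, 2.0]])
eigvals = np.linalg.eigvalsh(H)
print(eigvals)  # approximately [0, 4]: PSD but singular, so convex, not strictly

# The null direction (1, -1) traces the line of equally good minimisers:
f = lambda x, y: (x + y - 2) ** 2
for t in (-3.0, 0.0, 5.0):
    assert f(1 + t, 1 - t) == 0.0  # every point on x + y = 2 attains the minimum
```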
45,063
Convexity of linear regression
$$(x+y-2)^2=0$$ $$x+y=2$$ $$y=2-x$$ You can pick any $x$, and get a corresponding $y$, i.e. there's no unique solution. With two unknowns and one observation, there's not going to be a unique solution
45,064
Why use mean of posterior distribution instead of probability?
railroad numbers its locomotives in order 1..N. One day you see a locomotive with the number 60. Estimate how many locomotives the railroad has. The example concerns the so-called German tank problem. From what I see, Allen B. Downey does not suggest that taking the mean of the posterior distribution enables us to calcul...
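A minimal numeric sketch of the posterior in question, assuming (as Downey's book does) a uniform prior on the fleet size up to an arbitrary upper bound, here 1000, and likelihood $1/N$ for seeing number 60 when $N \ge 60$:

```python
import numpy as np

upper = 1000          # arbitrary prior upper bound (an assumption)
obs = 60              # observed locomotive number
N = np.arange(1, upper + 1)

prior = np.ones_like(N, dtype=float)        # uniform prior on 1..1000
like = np.where(N >= obs, 1.0 / N, 0.0)     # P(see #60 | fleet size N)
post = prior * like
post /= post.sum()

print(N[np.argmax(post)])   # 60: the posterior mode stays at the observation
print((N * post).sum())     # the posterior mean is much larger, roughly 333
```

This shows why the mode alone is an unappealing point estimate here, and why one might report the mean (or an interval) instead.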
45,065
Why use mean of posterior distribution instead of probability?
Indeed, the mean of the posterior says nothing that the posterior density itself does not contain. However, as it minimises the loss function $$ \operatorname{mean}(p(\theta|x)) = \arg\min_{\theta^{*}} \int_{\theta} \|\theta^{*}-\theta\|^2 \, p(\theta|x) \, d\theta $$ it provides a number (which can be interpreted more easily than...
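This minimisation property can be verified on a toy discrete posterior (the mixture below is purely illustrative): the grid point that minimises the expected squared loss coincides with the posterior mean up to the grid spacing.

```python
import numpy as np

# A discrete stand-in for some posterior p(theta | x): a two-bump mixture.
theta = np.linspace(0.0, 10.0, 2001)
post = (np.exp(-0.5 * ((theta - 3.0) / 1.2) ** 2)
        + 0.3 * np.exp(-0.5 * ((theta - 7.0) / 0.8) ** 2))
post /= post.sum()

mean = (theta * post).sum()

# Expected squared loss for each candidate point estimate theta_star.
loss = ((theta[:, None] - theta[None, :]) ** 2 * post[None, :]).sum(axis=1)
best = theta[np.argmin(loss)]

print(mean, best)  # the two agree up to the grid spacing (0.005)
```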
45,066
Zero-inflated Poisson regression Vuong test: Raw, AIC- or BIC-corrected results
I am convinced that it is incorrect to use the Vuong test -- in any of its forms -- as a test for zero-inflation. I have had a paper "The misuse of the Vuong test for non-nested models to test for zero-inflation" published that explains why. See http://cybermetrics.wlv.ac.uk/paperdata/misusevuong.pdf. I have also pre...
45,067
Zero-inflated Poisson regression Vuong test: Raw, AIC- or BIC-corrected results
Great question, with a very un-great answer: it depends. It depends on whether or not there actually is zero-inflation in the DGP. To say it another way, the Vuong test is conditional - not a diagnosis - so there will be much to justify when it comes to your results. The best explanation I have found is in Desmarais an...
45,068
Zero-inflated Poisson regression Vuong test: Raw, AIC- or BIC-corrected results
This is a very interesting question I'm also searching for. Unfortunately, I couldn't find an answer yet. That's why I cannot help you with explaining the difference between Raw, AIC and BIC. However, I can help you with your initial question, which model you should choose. The AIC- and BIC-corrected tests are based ch...
45,069
How do I understand ANCOVA in basic layman's terms?
To answer your question, I would like to invite you to think of a broader picture first, and then take you back to your original question. First, I would like to introduce a comparison between ANOVA and linear regression with one categorical independent variable; second, I would like to introduce a comparison between ANCOVA...
45,070
How do I understand ANCOVA in basic layman's terms?
ANOVA looks at the influence of one or more grouping variables (factors) on some continuous dependent measure. ANCOVA includes at least one grouping variable, but also includes interval-or-ratio-scaled variables on the IV side that are assumed to relate to the DV in linear fashion as in a regression. ANOVA will let me ...
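The regression view of this can be sketched with plain least squares (toy data, numpy only; variable names are illustrative): fitting a dummy-coded grouping variable reproduces the group means exactly, and appending the continuous covariate turns the same regression into an ANCOVA-style model.

```python
import numpy as np

rng = np.random.default_rng(1)
group = np.repeat([0, 1], 50)           # grouping factor with two levels
x = rng.normal(size=100)                # continuous covariate
y = 2.0 + 1.5 * group + 0.8 * x + rng.normal(scale=0.1, size=100)

# "ANOVA" model: intercept + group dummy.
X_anova = np.column_stack([np.ones(100), group])
b_anova, *_ = np.linalg.lstsq(X_anova, y, rcond=None)
# With dummy coding, the coefficients are the group-0 mean and the mean difference:
print(b_anova, y[group == 0].mean(), y[group == 1].mean() - y[group == 0].mean())

# "ANCOVA" model: the same dummy plus the covariate, as in ordinary regression.
X_ancova = np.column_stack([np.ones(100), group, x])
b_ancova, *_ = np.linalg.lstsq(X_ancova, y, rcond=None)
print(b_ancova)  # close to the true (2.0, 1.5, 0.8)
```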
45,071
Data augmentation step in Krizhevsky et al. paper
They say they increased the size of the training set by a factor of 2048. Does this mean they trained on a total of 2048 × 1.2 million images? Yes, in the paper: The first form of data augmentation consists of generating image translations and horizontal reflections. We do this by extracting random 224x224 patches ...
45,072
Data augmentation step in Krizhevsky et al. paper
I think they've trained only on 1.2M images. Here is why: Even if they could get 0.001s per forward and backward pass (with 1 Titan X and cuDNN), it would take this much time to train on 2048*1.28M images for 90 epochs with mini-batch SGD: 0.001*2048*1280000*90/60/60/24 = ~ 2730 days = ~ 7.5 years
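The back-of-the-envelope arithmetic checks out:

```python
sec_per_pass = 0.001       # assumed forward+backward time per image
images = 2048 * 1_280_000  # fully enumerated augmented dataset size
epochs = 90

days = sec_per_pass * images * epochs / 60 / 60 / 24
print(round(days))         # 2731 days
years = days / 365
print(round(years, 1))     # 7.5 years
```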
45,073
Data augmentation step in Krizhevsky et al. paper
They are actually training on 1.2 million * 2048 training images. We do this by extracting random 224 × 224 patches (and their horizontal reflections) from the 256×256 images For each training image of size 256x256, if you extract patches of size 224x224, you can get up to 1024 224x224 patches from the image ((256-...
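A sketch of the count behind the factor of 2048 (note: counting top-left offsets inclusively would give $33^2 = 1089$; the paper's factor of 2048 corresponds to $(256-224)^2 = 32^2 = 1024$ positions times 2 reflections):

```python
img, patch = 256, 224
positions = (img - patch) ** 2    # 32 * 32 = 1024 distinct top-left offsets
augment_factor = positions * 2    # times 2 for horizontal reflections
print(positions, augment_factor)  # 1024 2048
```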
45,074
R Time series forecasting: Having issues selecting fourier pairs for ARIMA with regressors
You're hitting the wall because you're exhausting limitations of the first fourier transform fourier(1:n,i,m1). As RandomDude correctly pointed out above, # of transforms i should be less than half period (m1). However, if, with your code, you run 2 cycles -- one for i, and another for j, where j would be # of transfor...
45,075
R Time series forecasting: Having issues selecting fourier pairs for ARIMA with regressors
Is there a reason why you are not using the fourier() function in the forecast package? When you try to build a fourier term of a seasonal time series object your K must be smaller than period/2. Otherwise you get an error: fourier(ts(test, frequency=7),4) #3 works, 4+ doesn't Error in ...fourier(x, K, 1:length(x)) : ...
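The period/2 limit is not an arbitrary restriction of fourier(): sampled at integer time steps, a harmonic of order k > period/2 aliases onto a lower-order one, so adding it contributes no new regressor. A small numpy sketch of the aliasing for period 7 (k = 5 collapses onto k = 2, up to sign):

```python
import numpy as np

m = 7                  # seasonal period
t = np.arange(100)     # integer time index

# Because 5/7 = 1 - 2/7 and sin/cos are evaluated only at integer t,
# harmonic k = 5 is indistinguishable (up to sign) from harmonic k = 2.
s5 = np.sin(2 * np.pi * 5 * t / m)
s2 = np.sin(2 * np.pi * 2 * t / m)
print(np.allclose(s5, -s2))   # True
c5 = np.cos(2 * np.pi * 5 * t / m)
c2 = np.cos(2 * np.pi * 2 * t / m)
print(np.allclose(c5, c2))    # True
```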
45,076
R Time series forecasting: Having issues selecting fourier pairs for ARIMA with regressors
Optimization of Fourier pairs based on AICc values. This is for yearly and monthly seasonality on data without weekends. The ranges 0:10 and 1:20 should be changed accordingly for different seasonal periods. Or increased for a broader search. msts_test <- msts( test , seasonal.periods = c(21.66,260)) my_aic_df <- ma...
45,077
Class weights in caret [closed]
I haven't gotten around to implementing it for all the models that can accept weights. Right now, it should work for rpart variants, glmnet, gamSpline, glmboost, gamboost, evtree, ctree, ctree2, chaid, cforest, blackboost, treebag, glm, glmStepAIC, and bayesglm. Note that ksvm function does not have a weight parameter...
45,078
PCA: Eigenvectors of opposite sign and not being able to compute eigenvectors with `solve` in R
1) The definition of eigenvector $Ax = \lambda x$ is ambidextrous. If $x$ is an eigenvector, so is $-x$, for then $$A(-x) = -Ax = -\lambda x = \lambda (-x)$$ So the definition of an eigenbasis is ambiguous up to sign. 2) It's hard to know for sure, but I have a strong suspicion of what is happening here. Your equation ...
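Point (1) is easy to confirm numerically (toy symmetric matrix): if $v$ is an eigenvector of $A$, so is $-v$ with the same eigenvalue, which is why two PCA implementations can legitimately disagree on signs.

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])
vals, vecs = np.linalg.eigh(A)   # eigenvalues 1 and 3

for lam, v in zip(vals, vecs.T):
    # Both v and -v satisfy the eigenvector equation A x = lambda x.
    assert np.allclose(A @ v, lam * v)
    assert np.allclose(A @ (-v), lam * (-v))
```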
45,079
Probability that one sum of squared standard normals is greater than a constant times another such sum
You can use the relation of the F-distribution to the chi-squared $$F_{m,n}=\frac{\chi_m^2/m}{\chi_n^2/n}$$ $P\{Y_3^2 + Y_4^2+ ... + Y_n^2 \geq \alpha ( Y_1^2 + Y_2^2)\}=P(\chi_{n-2}^2\ge\alpha\chi_2^2)=P(\frac{\chi_{n-2}^2}{\chi_2^2}\ge\alpha)$ Now you can adjust it to be in the form of an $F$. $$P\Bigg(F_{n-2,2}\ge\...
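As a supplementary check (not part of the F-table route above): this particular ratio also admits a closed form, since $P(\chi^2_2 \le s/\alpha) = 1 - e^{-s/(2\alpha)}$ can be averaged over $\chi^2_{n-2}$ using its MGF, and a direct simulation agrees:

```python
import numpy as np

n, alpha = 10, 1.5

# P(chi2_{n-2} >= alpha * chi2_2)
#   = 1 - E[exp(-chi2_{n-2} / (2*alpha))]          (integrate out chi2_2)
#   = 1 - (alpha / (alpha + 1)) ** ((n - 2) / 2)   (chi-square MGF)
p_exact = 1 - (alpha / (alpha + 1)) ** ((n - 2) / 2)
print(p_exact)   # approximately 0.8704 for n=10, alpha=1.5

# Monte Carlo check, built exactly as in the question.
rng = np.random.default_rng(0)
Y = rng.standard_normal((200_000, n))
num = (Y[:, 2:] ** 2).sum(axis=1)   # Y_3^2 + ... + Y_n^2
den = (Y[:, :2] ** 2).sum(axis=1)   # Y_1^2 + Y_2^2
p_mc = (num >= alpha * den).mean()
print(p_mc)      # close to p_exact
```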
45,080
Probability that one sum of squared standard normals is greater than a constant times another such sum
In keeping with the self-study policy I will leave some hints rather than post a complete answer, but also try to explore a little about why this type of question "works". Probably I have to use the fact that the sum of squared standard gaussians is chi-distributed random variable Yes, but you need to make that "th...
45,081
Which formula is this?
Scheaffer et al[1] call this the "estimated variance of $\bar{y}$", or $\widehat{V}(\bar{y})$ See equation 4.2, p83 (Well, they have $\left(1-\frac{n}{N}\right)\frac{s^2}{n}$, but it's trivial to show they're the same. They call $\left(1-\frac{n}{N}\right)$ the $\text{fpc}$, for finite population correction.) They say ...
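A quick sketch of the computation (Python, with made-up sample values), showing the estimated variance of $\bar{y}$ with the finite population correction:

```python
import statistics

y = [4, 7, 5, 6, 8, 5, 7, 6]   # hypothetical sample
N = 100                        # hypothetical population size
n = len(y)
s2 = statistics.variance(y)    # sample variance s^2 (divisor n - 1)

fpc = 1 - n / N                # finite population correction, (N - n)/N
var_ybar = fpc * s2 / n        # estimated V(ybar) = (1 - n/N) * s^2 / n

print(var_ybar)
```

Note that as $n \to N$ (a census) the fpc drives the estimated variance to zero, and as $N \to \infty$ it reduces to the familiar $s^2/n$.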
45,082
Sum of binomial coefficients with increasing $n$
Let's add in an initial value of $1 = \binom{n}{0}$. The fundamental relationship $$\binom{n}{k-1} + \binom{n}{k} = \binom{n+1}{k}\tag{1}$$ makes the sum telescope: $$\eqalign{ &\color{Blue}{\binom{n}{0} + \binom{n}{1}} &+\binom{n+1}{2} &+ \binom{n+2}{3} + \cdots &+ \binom{n+m}{m+1} \\ & =\color{Blue}{\binom{n+1}{1}...
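The telescoped identity $\binom{n}{0} + \binom{n}{1} + \binom{n+1}{2} + \cdots + \binom{n+m}{m+1} = \binom{n+m+1}{m+1}$ is easy to check numerically (Python sketch; `n` and `m` are arbitrary):

```python
from math import comb

n, m = 5, 7   # arbitrary values
# C(n,0) + C(n,1) + C(n+1,2) + ... + C(n+m, m+1)
total = comb(n, 0) + sum(comb(n + k - 1, k) for k in range(1, m + 2))
print(total == comb(n + m + 1, m + 1))  # True: the sum telescopes to C(n+m+1, m+1)
```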
45,083
Sum of binomial coefficients with increasing $n$
$\binom{n+1}{2} = \binom{n}{1} + \binom{n}{2} = \binom{n}{1} + \frac{n-1}{2} \binom{n}{1}$ $\binom{n+2}{3} = \binom{n+1}{2} + \binom{n+1}{3} = \binom{n+1}{2} + \frac{n-1}{3} \binom{n+1}{2}$ $\binom{n+3}{4} = \binom{n+2}{3} + \binom{n+2}{4} = \binom{n+2}{3} + \frac{n-1}{4} \binom{n+2}{3}$ ... ... $\binom{n+m}{m} = \bino...
45,084
Which stats tests to use? Bee flight activity quantified during an eclipse
Here is a graph of the two days' data (Day 1, gold line with black markers, Day 2 black line with gold markers; I coded time in terms of minutes elapsed since the first measurement of the day): The trouble with asking "whether there was a difference between Day 1 and 2" is that the answer is Yes. There are many differ...
45,085
Clustering high dimensional data (p > n) in R
First, some background: R is a good choice and has many clustering methods across different packages. The functions include Hierarchical Clustering, Partitioning Clustering, Model-Based Clustering, and Cluster-wise Regression. Connectivity-based clustering, or Hierarchical clustering (also called hierarchical cluster ana...
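For comparison, here is a minimal sketch of connectivity-based (hierarchical) clustering on a $p > n$ data set, in Python with scipy; the data are simulated, and this is analogous in spirit to what R's hclust does rather than a drop-in replacement:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(1)
n, p = 20, 200                 # more variables than observations (p > n)
X = rng.normal(size=(n, p))
X[:10] += 2.0                  # shift half the rows to create a second group

Z = linkage(X, method="ward")  # agglomerative clustering on the 20 rows
labels = fcluster(Z, t=2, criterion="maxclust")
print(labels)
```

Hierarchical clustering works here because it operates on the $n \times n$ matrix of pairwise distances between observations, so $p > n$ poses no special difficulty.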
45,086
Linear mixed effect model vs. Ordered Probit vs. Ordered Logit with ordinal response
This is going to be at best a partial answer, but I hope it helps a little. Given that your response is ordinal, you have to ask yourself whether the distance between different categories differs depending on the starting position. In other words, if you think the gap between 1 and 3 is not necessarily the same gap as...
45,087
Monte Carlo integration aim for maximum variance
50% is wrong: the closer you can get to 100% the better off you are. Let the measure of the target region $T$ be $t$ and the measure of the enclosing (or "probe") region $V$ be $v$. The chance of a uniformly random point in $V$ to lie in $T$ therefore is $t/v$. This Bernoulli distribution has variance $$\frac{t}{v}\le...
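The effect is easy to see by simulation (a Python sketch; the target $T$ is the unit disk, so $t = \pi$, and the probe regions are squares of side 2 and 4):

```python
import numpy as np

rng = np.random.default_rng(0)
N, reps = 10_000, 200

def area_estimates(side):
    # hit-or-miss: fraction of uniform points inside the unit disk, times probe area
    xy = rng.uniform(-side / 2, side / 2, size=(reps, N, 2))
    hits = (xy ** 2).sum(axis=-1) <= 1.0
    return hits.mean(axis=-1) * side ** 2

tight = area_estimates(2.0)   # probe barely encloses the disk: hit rate ~ pi/4
loose = area_estimates(4.0)   # probe area 16: hit rate ~ pi/16
print(tight.std(), loose.std())   # the tight probe gives the smaller spread
```

Both estimators are unbiased for $\pi$, but the spread across repetitions matches the $t(v-t)/N$ formula: shrinking the probe region toward the target shrinks the variance.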
45,088
Monte Carlo integration aim for maximum variance
Partial solution that explains why 2 % is worse than 50 %, but does not arrive at the 50 % guideline. The variance of the estimate $\hat{p}$ of the proportion $|T|/V$, \begin{equation} \textrm{Var}\left(\hat{p}\right) = \frac{p(1-p)}{N}, \end{equation} is indeed maximized when $V=2|T|$. However, we are actually interes...
45,089
Monte Carlo integration aim for maximum variance
It's easy. Let's say T is a unit circle, and S is a square that contains it. When you sample from S, you want to pick points which are closer to where the circle's boundary is, right? If you sample points near the center of the square, you know that they're going to be inside the circle too. There's very little information gai...
45,090
If any two variables follow a normal bivariate distribution does it also have a multivariate normal distribution?
Here's a counter-example: Let $X$, $Y$, $Z$ be independent standard normal, and let $W = |Z|\cdot \text{sign}(XY)$. Then $(W,X)$, $(W,Y)$ and $(X,Y)$ are bivariate normal, but $(W,X,Y)$ is not trivariate normal, since $WXY$ is never negative. What's happening is that the trivariate distribution has been constructed so ...
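The counterexample is easy to verify by simulation (a Python sketch, assuming numpy):

```python
import numpy as np

rng = np.random.default_rng(0)
X, Y, Z = rng.standard_normal((3, 100_000))
W = np.abs(Z) * np.sign(X * Y)

# Each pair (W,X), (W,Y), (X,Y) is bivariate normal (in fact, each pair
# consists of independent standard normals), yet W*X*Y = |Z|*|X*Y| >= 0 always.
print((W * X * Y).min() >= 0)             # True
print(round(np.corrcoef(W, X)[0, 1], 3))  # ~0: W and X are uncorrelated
```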
45,091
If any two variables follow a normal bivariate distribution does it also have a multivariate normal distribution?
No. In theory, exceptions like @Glen_b's answer may apply. In practice, multivariate normality partly depends on how precisely your variables follow their normal uni/bivariate distributions. It is rare that any real distribution has exactly zero skewness and zero excess kurtosis, after all. Therefore, if you're inferr...
45,092
How to specify/restrict the sign of coefficients in a GLM or similar model in R
The negative estimated coefficient on something that you KNOW is positive comes from omitted variable bias and/or collinearity between your regressors. For prediction, this isn't so problematic, so long as the new data whose outcome (price?) you predict are sampled from the same population as your sample. The negati...
45,093
How to specify/restrict the sign of coefficients in a GLM or similar model in R
You can do this in R with Lavaan by specifying the model as a structural equation model and adding constraints. I'm not sure if it's a good idea, but it can be done. #load library and generate some data library(lavaan) d <- as.data.frame(matrix(rnorm(1:3000), ncol=3, dimnames=list(NULL, c("y", "x1", "x2")))) Run it w...
45,094
How to specify/restrict the sign of coefficients in a GLM or similar model in R
For future reference, if you don't mind switching to a lasso-type GLM, you can use cv.glmnet with the argument lower.limits to specify which parameters should not go below 0. It also has the nice property of removing a lot of spurious correlations from the fit: using as the reference model the model with lambda = "lambda....
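For readers working in Python rather than R, a roughly analogous approach (a sketch, not the glmnet API) is scikit-learn's Lasso with positive=True, which constrains every coefficient to be non-negative:

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
# one truly positive effect, one truly negative, one null (simulated data)
y = 2.0 * X[:, 0] - 1.0 * X[:, 1] + rng.normal(scale=0.5, size=200)

# positive=True forces every coefficient to be >= 0
fit = Lasso(alpha=0.1, positive=True).fit(X, y)
print(fit.coef_)   # the truly negative effect is clipped to zero, not negative
```

As with lower.limits = 0 in glmnet, a predictor whose unconstrained coefficient would be negative gets shrunk to exactly zero, effectively dropping it from the fit.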
45,095
Standard error from correlation coefficient
If you look at the Wikipedia page for the Pearson product-moment correlation, you will find sections that describe how confidence intervals can be calculated. Typically, people will use Fisher's $z$-transformation (arctanh) to turn $r$ into a variable that is approximately normally distributed: $$ z_r = \frac 1 2 \...
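A minimal sketch of this interval in Python (the values of $r$ and $n$ are made up): transform with arctanh, build a normal interval on the $z$ scale, then back-transform the endpoints with tanh.

```python
import math

r, n = 0.30, 50                    # hypothetical correlation and sample size
z = math.atanh(r)                  # z_r = (1/2) ln((1 + r) / (1 - r))
se = 1 / math.sqrt(n - 3)          # standard error on the z scale
lo, hi = math.tanh(z - 1.96 * se), math.tanh(z + 1.96 * se)
print(round(lo, 3), round(hi, 3))  # an approximate 95% CI for the correlation
```

Because tanh maps back into $(-1, 1)$, this interval can never stray outside the admissible range for a correlation, unlike a naive interval built directly on the $r$ scale.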
45,096
Standard error from correlation coefficient
To add to gung's answer, one can also use a lazy approach of directly calculating the standard error for the correlation. This will produce inaccurate results in some cases and may produce impossible out-of-range confidence intervals. But for most cases, it's fine. The equation is $SE_r = \sqrt{(1-r^2)/(n-2)}$. Example calculation of confidenc...
45,097
Performance evaluation of auto.arima in R and UCM on one dataset
I employed AUTOBOX (a piece of software that I have helped develop). The automatic model identification scheme detected a first-difference model with an AR(3) component. The test for constancy of parameters revealed a possible breakpoint at or around period 53 (year 1953; note that the UCM model declared a new trend ...
45,098
Performance evaluation of auto.arima in R and UCM on one dataset
A recent change to the way regression coefficients are initialized in the estimation of ARIMA models means that a different model is now selected by auto.arima (using R3.0.2): > auto.arima(window(eggs,end=1983)) ARIMA(0,1,0) with drift Coefficients: drift -2.2665 s.e. 3.1133 sigma^2 estimate...
45,099
Performance evaluation of auto.arima in R and UCM on one dataset
I don't get the same result as the OP (version 5.0 of the forecast package). If you run the following, the result is indeed a linear downward trend. install.packages("fma") library(fma) install.packages("forecast") library(forecast) #window as per the OP eggs2<-window(eggs, start=c(1900), end=c(1983)) plot(eggs2) ...
45,100
AR(1) coefficient is correlation?
For a second-order stationary series it is the correlation coefficient between the dependent value and its lag. Specify $$y_{t+1} = \beta y_t + u_{t+1}\qquad u_{t+1}= \text{white noise}$$ The correlation coefficient between $y_{t+1}$ and $y_{t}$ is defined as usual $$\rho_{(1)} = \frac{\text{Cov}(y_{t+1},y_{t})}{\sigma...
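A quick simulation illustrating this (Python sketch; $\beta$ and the series length are arbitrary): for a stationary AR(1), the lag-1 sample autocorrelation approaches the coefficient.

```python
import numpy as np

rng = np.random.default_rng(0)
beta, T = 0.6, 200_000            # arbitrary coefficient and series length
u = rng.standard_normal(T)        # white noise
y = np.empty(T)
y[0] = 0.0
for t in range(1, T):
    y[t] = beta * y[t - 1] + u[t]

rho1 = np.corrcoef(y[1:], y[:-1])[0, 1]   # lag-1 sample autocorrelation
print(round(rho1, 3))             # close to beta
```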