How to show "not a number" on a line plot?
Stephen Few has an article, Displaying Missing Values and Incomplete Periods in Time Series, which discusses some possibilities such as using skipped, dashed or faded connections for missing Y values. Those work well when the X values are at regular intervals (above) but not so well when the X values are irregular (be...
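One practical way to implement the "skipped connection" option is to split the series at missing values, so each run of observed points is drawn as its own segment and nothing is connected across the gap. A minimal Python sketch (the `segments` helper is a hypothetical name, not from the article):

```python
def segments(xs, ys):
    """Split (xs, ys) into runs of observed points, breaking at None."""
    seg_x, seg_y, out = [], [], []
    for x, y in zip(xs, ys):
        if y is None:              # missing value: close the current segment
            if seg_x:
                out.append((seg_x, seg_y))
                seg_x, seg_y = [], []
        else:
            seg_x.append(x)
            seg_y.append(y)
    if seg_x:
        out.append((seg_x, seg_y))
    return out

print(segments([1, 2, 3, 4, 5], [1.0, 2.0, None, 4.0, 5.0]))
# [([1, 2], [1.0, 2.0]), ([4, 5], [4.0, 5.0])]
```

Each returned segment can then be passed to a plotting library as a separate line, leaving a visible gap at the missing X value.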
How to show "not a number" on a line plot?
It has been argued that a "bad" visualization is one that deceives or distorts. A subtle form of deception in line plots is to connect successive points with line segments (or higher-order splines), because that presents a compelling but false visual representation that (a) intermediate values (not in the dataset) exi...
Statistical Analysis on Sparse data?
Pretty sketchy question but answerable nonetheless. The answer is "yes," there is lots one can do with sparse data. This response is far from "complete" but will review a few options in a kind of "DIY" shotgun listing. In other words, it is up to the analyst to decide which option may be appropriate to pursue. The firs...
What is the most beginner-friendly book for information geometry?
I also think these books are quite hard to read at the first place too (but I'm an applied guy). For me, it was simpler to start with scattered material/tutorial/applications using bits of IG such as: Pattern learning and recognition on statistical manifolds: An information-geometric review .
Sum-to-zero constraint in one-way ANOVA
Consider for simplicity that $m=2$ and compare the models $\mu=0,\beta_1=0,\beta_2=2$, $\mu=1,\beta_1=-1,\beta_2=1$, $\mu=2,\beta_1=-2,\beta_2=0$. These models are all special cases of $(\mu,\beta_1,\beta_2)=(\mu,-\mu,2-\mu)$. You can see that whatever $\mu$ we choose, $\mu+\beta_1=0$ and $\mu+\beta_2=2$, so there's...
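The non-identifiability can be checked mechanically: all three parameterizations above produce exactly the same fitted group means $\mu + \beta_j$. A small sketch:

```python
# The three (mu, beta_1, beta_2) parameterizations from the answer.
candidates = [
    (0, [0, 2]),   # mu = 0, beta = (0, 2)
    (1, [-1, 1]),  # mu = 1, beta = (-1, 1)
    (2, [-2, 0]),  # mu = 2, beta = (-2, 0)
]

# Fitted group means mu + beta_j are identical in every case,
# so the data cannot distinguish between these parameter values.
fitted = [tuple(mu + b for b in beta) for mu, beta in candidates]
print(fitted)  # [(0, 2), (0, 2), (0, 2)]
```

This is exactly why a constraint such as sum-to-zero is needed: it picks out one member of this family of equivalent parameterizations.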
Variance of $Y$ in regression model?
$x_i$ is a single non-random variable, so by itself it has a variance of 0, and the formula you wrote simplifies to just $\sigma^2$. Normally $y_i$ is expressed as follows: $$y_i \sim N(\beta_1 + \beta_2x_i, \;\sigma^2)$$ This way it should be evident how the variance of $y_i$ is determined. $\beta_1 + \beta_2x_i$ onl...
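A quick simulation illustrates this; the values of $\beta_1$, $\beta_2$, $x_i$ and $\sigma$ below are arbitrary choices for the sketch:

```python
import random

random.seed(0)
beta1, beta2, sigma = 2.0, 0.5, 3.0
x_i = 10.0  # fixed, non-random regressor

# y_i = beta1 + beta2 * x_i + eps with eps ~ N(0, sigma^2); since x_i is
# fixed, all the variance in y_i comes from eps.
ys = [beta1 + beta2 * x_i + random.gauss(0, sigma) for _ in range(200_000)]
mean_y = sum(ys) / len(ys)
var_y = sum((y - mean_y) ** 2 for y in ys) / (len(ys) - 1)
print(round(var_y, 1))  # should be close to sigma^2 = 9
```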
Variance of $Y$ in regression model?
Let's say you have the regression equation: $$ y_i = \beta_0 + \beta_1 x_i + \epsilon_i $$ Different books, different lecture notes, etc. follow two different approaches: Treat $x_i$ as scalars. They're entirely exogenous. They're not random. Treat $x_i$ as a random variable. The answer of @Jarko Dubbeldam takes a...
Compound Poisson random variable
We'll denote $\mu_N = \text{E}(N)$, $\mu_X = \text{E}(X)$ and $\sigma_N^2 = \text{Var}(N)$. To find the covariance we can use the formula \begin{align} \text{Cov}(S, N) &= \text{E}(SN) - \text{E}(S) \text{E}(N) \\ &= \text{E}(SN) - \mu_N^2 \mu_X \end{align} where the second equality is found by taking an iterated exp...
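Carrying that iterated-expectation calculation through gives $\text{Cov}(S, N) = \mu_X \sigma_N^2$. A Monte Carlo sketch can confirm it; the Poisson choice for $N$ and Uniform choice for $X$ here are arbitrary, picked only so that $\mu_X \sigma_N^2 = 4$:

```python
import math
import random

random.seed(1)
lam = 4.0   # N ~ Poisson(lam): mu_N = Var(N) = 4
mu_x = 1.0  # X ~ Uniform(0, 2): E(X) = 1

def poisson(lam):
    # Knuth's multiplication algorithm (adequate for small lam)
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p <= L:
            return k
        k += 1

pairs = []
for _ in range(200_000):
    n = poisson(lam)
    s = sum(random.uniform(0, 2) for _ in range(n))  # S = X_1 + ... + X_N
    pairs.append((s, n))

mean_s = sum(s for s, _ in pairs) / len(pairs)
mean_n = sum(n for _, n in pairs) / len(pairs)
cov = sum((s - mean_s) * (n - mean_n) for s, n in pairs) / (len(pairs) - 1)
print(round(cov, 2))  # should be close to mu_X * Var(N) = 4
```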
What are the differences between generalized additive model, basis expansion and boosting?
Basis expansion implies a basis function. In mathematics, a basis function is an element of a particular basis for a function space. For example, sines and cosines form a basis for Fourier analysis and can duplicate any waveform shape (square waves, sawtooth waves, etc.) just by adding enough basis functions together. ...
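As a concrete illustration of the sine-basis point (not tied to any particular GAM software), a truncated Fourier sine series converging to a square wave:

```python
import math

def square_wave_partial(x, n_terms):
    """Partial Fourier sum for a +/-1 square wave:
    (4/pi) * sum over odd k of sin(k x) / k."""
    return (4 / math.pi) * sum(
        math.sin(k * x) / k for k in range(1, 2 * n_terms, 2)
    )

# Adding more sine basis functions drives the sum toward the square
# wave's value of 1 on (0, pi).
print(round(square_wave_partial(math.pi / 2, 5), 3))
print(round(square_wave_partial(math.pi / 2, 200), 3))  # much closer to 1
```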
GAM model selection
If you are using an extra penalty on each term, you can just fit the model and you are done (from the point of view of selection). The point of these penalties is to allow for shrinkage of the perfectly smooth functions in the spline basis expansion as well as the wiggly functions. The results of the model fit account for...
Why is lower.tail=F used when manually calculating the p-value from a t score?
Check out the documentation for R's pt() function. lower.tail logical; if TRUE (default), probabilities are P[X ≤ x], otherwise, P[X > x].* In other words, when lower.tail=FALSE you get the probability to the right of X (the first of your two diagrams). Or just run it for yourself: > pt(2,10) [1] 0.963306 > pt(...
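The same lower-vs-upper-tail complement can be sketched with Python's standard library, using the normal distribution as a stand-in for pt()'s t distribution (an assumption for the sketch; the relationship P[X > x] = 1 - P[X <= x] is the same for either):

```python
from statistics import NormalDist  # stdlib stand-in for a t CDF like pt()

d = NormalDist()
x = 2.0
lower = d.cdf(x)       # P[X <= x], what lower.tail=TRUE gives
upper = 1 - d.cdf(x)   # P[X > x],  what lower.tail=FALSE gives
print(round(lower + upper, 10))  # the two tails are complements: 1.0
print(round(upper, 5))           # upper-tail probability used for the p-value
```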
Why is lower.tail=F used when manually calculating the p-value from a t score?
The Student t distribution is symmetric, so if you calculate the area under the upper tail and multiply it by 2, you end up with the two-tailed p-value of your test score.
Explanation for this event on a high-dimensional dataset
This is a question about high-dimensional Euclidean geometry, related to the "curse of dimensionality". It comes down to this: almost all the surface area of a sphere in $d$-dimensional Euclidean space $E^d$ is concentrated around its equator. The nearest neighbors of any point will tend to be scattered in random dir...
Explanation for this event on a high-dimensional dataset
First of all, a comment on the question: Points in the sample $S$ won't be at $\sqrt{d}$ distance away from the center. Distance squared will follow a $\chi^2$ distribution with $d$ degrees of freedom and therefore with $d$ mean and $2d$ variance. Then, distance will be distributed around $\sqrt{d}$ but there is not a ...
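A short simulation confirms the $\chi^2_d$ claim: squared distances of standard Gaussian points from the origin have mean $d$ and variance $2d$ (here $d = 100$ is an arbitrary choice for the sketch):

```python
import random

random.seed(2)
d, n = 100, 20_000

# Squared distance from the origin of standard Gaussian points in R^d:
# a chi-squared variable with d degrees of freedom.
sq = [sum(random.gauss(0, 1) ** 2 for _ in range(d)) for _ in range(n)]
mean_sq = sum(sq) / n
var_sq = sum((s - mean_sq) ** 2 for s in sq) / (n - 1)
print(round(mean_sq, 1), round(var_sq))  # near d = 100 and 2d = 200
```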
Including ordinal independent variables in a linear mixed effects model (using the lme4 package in R)
(This answer applies to [generalized] linear models generally, not just mixed models.) This answer on SO discusses the interpretation of linear models with ordinal independent (predictor) variables. Here are two reasonable approaches, not clear which is best: treat the score as numeric. Advantages: simple, parsimoniou...
Is it possible to over-train a classifier?
Pankaj Daga's expansion is great; I'll take care of the illustration. Here is a typical curve when training a neural network: The reported F1-score for the test set should be the F1-score of the test set of the epoch where the F1-score of the validation set was the highest. (this is called "test best" on the figur...
Is it possible to over-train a classifier?
Your comment regarding the epoch is true. So, if you use too few epochs, you may underfit, and using too many epochs can result in overfitting. As you know, you can always increase the training accuracy arbitrarily by increasing model complexity and increasing the number of epoch steps. One way to try and alleviate thi...
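One common way to limit the number of epochs is early stopping on a validation score; a generic sketch (the function and the toy validation curve below are illustrative, not from the answer):

```python
def train_with_early_stopping(max_epochs, val_score, patience=3):
    """Stop when the validation score hasn't improved for `patience`
    epochs; return the best epoch and its score. `val_score(e)` stands
    in for one epoch of training followed by validation."""
    best, best_epoch, waited = float("-inf"), -1, 0
    for e in range(max_epochs):
        score = val_score(e)
        if score > best:
            best, best_epoch, waited = score, e, 0
        else:
            waited += 1
            if waited >= patience:
                break  # validation stopped improving: likely overfitting
    return best_epoch, best

def toy_curve(e):
    # Improves up to epoch 5, then degrades (the overfitting region)
    return 1 - (e - 5) ** 2 / 100

print(train_with_early_stopping(50, toy_curve))  # (5, 1.0)
```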
Missing Data Mixed Effects Modelling for Repeated Measures
Imputation using within subject means isn't a great idea because it will result in biased (too small) standard errors and possibly biased estimates. Assuming that the data are missing at random, a much better idea is to use multiple imputation. The mice package in R has the capability to impute continuous variables in ...
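The variance-shrinking effect of mean imputation is easy to see in a toy simulation (here the overall observed mean is used as a simple stand-in for within-subject means; the effect is the same in kind):

```python
import random

random.seed(3)
complete = [random.gauss(0, 1) for _ in range(10_000)]

# Delete roughly 30% of values at random, then impute each missing value
# with the mean of the observed values.
observed = [x for x in complete if random.random() > 0.3]
mean_obs = sum(observed) / len(observed)
imputed = observed + [mean_obs] * (len(complete) - len(observed))

def variance(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

print(round(variance(complete), 2), round(variance(imputed), 2))
# mean imputation shrinks the variance (and hence standard errors) toward 0
```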
Missing Data Mixed Effects Modelling for Repeated Measures
In his book Stef van Buuren describes the difficulties in multilevel modelling when a level-1 or level-2 predictor is missing. He advises using 2l.pmm from miceadds and setting the value to 3 in the predictor matrix, as outlined here under '7.10.2 Random intercepts, missing level-1 predictor'. This adds the cluster me...
How to sample value from Symmetric Geometric Distribution
I mostly agree with @Glen_b but I think the correct probability function is $$f(\theta) = \begin{cases} p_g &\theta = 0\\ \frac{1}{2}p_g(1-p_g)^{|\theta|-1} &\theta \neq 0\end{cases}$$ for integer $\theta$. This seems to be the only way to get the correct variance, and it is the same as the given formula up to a scalar...
How to sample value from Symmetric Geometric Distribution
The discrete Laplace distribution is very similar to the one you describe (check Inusah and Kozubowski, 2006, and Kotz, Kozubowski and Podgorski, 2012). The discrete Laplace distribution has probability mass function: $$ f(x) = \frac{1-p}{1+p} p^{|x-\mu|} $$ and cumulative distribution function $$ F(x) = \left\{\begin{array...
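One standard way to sample a discrete Laplace variable (taking $\mu = 0$) uses the fact that the difference of two i.i.d. geometric variables with pmf $(1-p)p^k$ on $k = 0, 1, \dots$ has exactly the pmf above. A Python sketch:

```python
import math
import random

random.seed(4)
p = 0.6

def geometric(p):
    """Sample G with P(G = k) = (1 - p) * p**k, k = 0, 1, ...
    via inverse-CDF sampling using P(G >= k) = p**k."""
    u = 1.0 - random.random()  # u in (0, 1], avoids log(0)
    return int(math.log(u) / math.log(p))

# Difference of two iid geometrics: discrete Laplace with mu = 0,
# f(x) = (1 - p)/(1 + p) * p**|x|
samples = [geometric(p) - geometric(p) for _ in range(100_000)]
frac_zero = sum(s == 0 for s in samples) / len(samples)
print(round(frac_zero, 3))  # near f(0) = (1 - p)/(1 + p) = 0.25
```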
How to sample value from Symmetric Geometric Distribution
The linked document is unclear; it doesn't even indicate the random variable (which I will call $X$). I assume it means $p_g$ & $\theta$ are parameters. It doesn't define the values taken by $X$ - which I believe should appear in the exponent as $|x-\theta|$ not as $|\theta|$ - nor the values taken by $\theta$ (which...
Generalized Additive Model interpretation with ordered categorical family in R
In these models, the linear predictor is a latent variable, with estimated thresholds $t_i$ that mark the transitions between levels of the ordered categorical response. The plots you show in the question are the smooth contributions of the four variables to the linear predictor, thresholds along which demarcate the ca...
Seasonal data deemed stationary by ADF and KPSS tests
Both the augmented Dickey-Fuller (ADF) test and the Kwiatkowski, Phillips, Schmidt and Shin (KPSS) test are tailored for detecting nonstationarity in the form of a unit root in the process. (The test equations explicitly allow for a unit root; see the reference below.) However, they are not tailored for detecting other f...
Seasonal data deemed stationary by ADF and KPSS tests
You can use the HEGY test for seasonality or the CH test for seasonality to check for seasonal unit roots. Better to use the HEGY test. ADF and KPSS test for non-seasonal unit roots. Since your seasonality is strong, take the seasonal difference and proceed to test for non-seasonal unit roots. Probably seasonal differencing will r...
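Seasonal differencing itself is a one-liner: subtract the value from one season earlier, $y_t - y_{t-s}$. A toy sketch with monthly period $s = 12$ (the series below is an artificial sine-plus-trend example):

```python
import math

s = 12
# Toy monthly series: period-12 seasonality plus a linear trend
y = [10 * math.sin(2 * math.pi * t / s) + 0.1 * t for t in range(60)]

# Seasonal difference: y_t - y_{t-s}
diff = [y[t] - y[t - s] for t in range(s, len(y))]

# The seasonal pattern cancels exactly; only the constant trend
# increment (0.1 * 12 = 1.2) remains.
print(all(abs(d - 1.2) < 1e-9 for d in diff))  # True
```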
Seasonal data deemed stationary by ADF and KPSS tests
It is because the DF/ADF test covers the first difference (i.e., the trend) only. It could iteratively cover larger differences (i.e., the seasonalities) as well, but it is defined as it is. This video https://www.youtube.com/watch?v=1opjnegd_hA explains the equations, where you will see the value of delta.
What's the difference between univariate and multivariate cox regression?
I think that many people who use the words "multivariate regression" with Cox models really mean to say "multiple regression." (I will confess to having done that myself; it's common in the literature.) "Multiple regression" means having more than one predictor in a regression model, while "multivariate regression" is ...
What's the difference between univariate and multivariate cox regression?
You should opt to do multivariable Cox regression analysis (not multivariate). As rightly pointed out by @EdM, multivariate means having more than one outcome variable, whereas in survival analysis you have only one outcome variable, i.e. time-to-event of interest. Since, in oncology the group of patients under study, in...
What's the difference between univariate and multivariate cox regression?
You should opt to do multivariable Cox regression analysis (not multivariate). As rightly pointed out by @EdM, multivariate means having more than one outcome variable, whereas in survival analysis you
What's the difference between univariate and multivariate cox regression? You should opt to do multivariable Cox regression analysis (not multivariate). As rightly pointed out by @EdM, multivariate means having more than one outcome variable, whereas in survival analysis you have only one outcome variable, i.e. time-to-e...
What's the difference between univariate and multivariate cox regression? You should opt to do multivariable Cox regression analysis (not multivariate). As rightly pointed out by @EdM, multivariate means having more than one outcome variable, whereas in survival analysis you
46,129
suitable non-linear equation to capture a 'J-shaped' relationship between x and y
You just need to choose some sensible options in nls. Here I used constrained nls, forcing the coefficients to have opposite signs. I ran this after your code above: mfit=nls(scatter ~ a1*exp(-b1*age)+a2*exp(b2*age), start=list(a1=.01,a2=.02,b1=.04,b2=.04),lower=list(0,0,0,0), algorithm="port",trace=TRUE) line...
suitable non-linear equation to capture a 'J-shaped' relationship between x and y
You just need to choose some sensible options in nls. Here I used constrained nls, forcing the coefficients to have opposite signs. I ran this after your code above: mfit=nls(scatter ~ a1*exp(-b1*age)
suitable non-linear equation to capture a 'J-shaped' relationship between x and y You just need to choose some sensible options in nls. Here I used constrained nls, forcing the coefficients to have opposite signs. I ran this after your code above: mfit=nls(scatter ~ a1*exp(-b1*age)+a2*exp(b2*age), start=list(a1=.01...
suitable non-linear equation to capture a 'J-shaped' relationship between x and y You just need to choose some sensible options in nls. Here I used constrained nls, forcing the coefficients to have opposite signs. I ran this after your code above: mfit=nls(scatter ~ a1*exp(-b1*age)
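The constrained fit in this answer (R's `nls` with `lower=` bounds and `algorithm="port"`) has a direct analogue in scipy's `curve_fit`, where `bounds` keeps every coefficient non-negative so one exponential decays and the other grows. A sketch on synthetic data, not the poster's:

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(0)

# J-shaped model from the answer: a falling plus a rising exponential.
def j_curve(age, a1, b1, a2, b2):
    return a1 * np.exp(-b1 * age) + a2 * np.exp(b2 * age)

age = np.linspace(0, 90, 200)
y = j_curve(age, 0.02, 0.08, 0.001, 0.06) + rng.normal(0, 0.0005, age.size)

# bounds=(0, inf) plays the role of nls(..., lower=..., algorithm="port").
popt, _ = curve_fit(j_curve, age, y,
                    p0=[0.01, 0.04, 0.02, 0.04],
                    bounds=(0.0, np.inf))
print(popt.round(4))
```

As in the R version, sensible starting values matter: exponential mixtures have flat, badly scaled loss surfaces far from the optimum.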
46,130
suitable non-linear equation to capture a 'J-shaped' relationship between x and y
FIRST APPROACH: Hurst et al. use the following form for a similar problem (as far as I can see): mortality = a + b * age * exp(c * age) where a, b, and c are parameters. The call to nls would be along the lines of dat <- data.frame(age=age, mortality=scatter) fit <- nls( mortality ~ a + b * age * exp(c*age), dat...
suitable non-linear equation to capture a 'J-shaped' relationship between x and y
FIRST APPROACH: Hurst et al. use the following form for a similar problem (as far as I can see): mortality = a + b * age * exp(c * age) where a, b, and c are parameters. The call to nls would be alon
suitable non-linear equation to capture a 'J-shaped' relationship between x and y FIRST APPROACH: Hurst et al. use the following form for a similar problem (as far as I can see): mortality = a + b * age * exp(c * age) where a, b, and c are parameters. The call to nls would be along the lines of dat <- data.frame(age=age,...
suitable non-linear equation to capture a 'J-shaped' relationship between x and y FIRST APPROACH: Hurst et al. use the following form for a similar problem (as far as I can see): mortality = a + b * age * exp(c * age) where a, b, and c are parameters. The call to nls would be alon
46,131
Raw data outperforms Z-score transformed data in SVM classification
Keep in mind why people typically scale features prior to estimating an SVM. The notion is that the data are on different scales, and this happenstance of how things were measured might not be desirable -- for example, measuring some length quantity in meters versus kilometers. Obviously one will have a much larger ran...
Raw data outperforms Z-score transformed data in SVM classification
Keep in mind why people typically scale features prior to estimating an SVM. The notion is that the data are on different scales, and this happenstance of how things were measured might not be desirab
Raw data outperforms Z-score transformed data in SVM classification Keep in mind why people typically scale features prior to estimating an SVM. The notion is that the data are on different scales, and this happenstance of how things were measured might not be desirable -- for example, measuring some length quantity in...
Raw data outperforms Z-score transformed data in SVM classification Keep in mind why people typically scale features prior to estimating an SVM. The notion is that the data are on different scales, and this happenstance of how things were measured might not be desirab
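The z-score transform this answer discusses is a one-liner; a minimal sketch with hypothetical two-feature data (the same kind of length measured in metres vs kilometres):

```python
import numpy as np

rng = np.random.default_rng(1)
X = np.column_stack([rng.normal(1500.0, 300.0, 100),   # feature in metres
                     rng.normal(1.5, 0.3, 100)])       # feature in kilometres

# Z-score: centre each column, then divide by its standard deviation.
Z = (X - X.mean(axis=0)) / X.std(axis=0)

# Both columns now have mean ~0 and standard deviation 1, so neither
# dominates a distance- or margin-based method merely because of its units.
print(Z.mean(axis=0).round(8), Z.std(axis=0).round(8))
```

Whether this helps a particular SVM is an empirical question, which is exactly the point of the answer: scaling removes unit artifacts, but it also discards any genuinely informative differences in feature spread.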
46,132
Raw data outperforms Z-score transformed data in SVM classification
SVM is minimizing hinge loss with ridge regularization $$ \min_\mathbf w \sum_i(1-y_i \mathbf w\cdot \mathbf x_i)_+ +\lambda ||\mathbf w||^2 $$ So, the scaling will make a difference when we have the regularization term. My hypothesis would be that the original scale of your features impacts regularization on different featu...
Raw data outperforms Z-score transformed data in SVM classification
SVM is minimizing hinge loss with ridge regularization $$ \min_\mathbf w \sum_i(1-y_i \mathbf w\cdot \mathbf x_i)_+ +\lambda ||\mathbf w||^2 $$ So, the scaling will make a difference when we have the r
Raw data outperforms Z-score transformed data in SVM classification SVM is minimizing hinge loss with ridge regularization $$ \min_\mathbf w \sum_i(1-y_i \mathbf w\cdot \mathbf x_i)_+ +\lambda ||\mathbf w||^2 $$ So, the scaling will make a difference when we have the regularization term. My hypothesis would be that the origi...
Raw data outperforms Z-score transformed data in SVM classification SVM is minimizing hinge loss with ridge regularization $$ \min_\mathbf w \sum_i(1-y_i \mathbf w\cdot \mathbf x_i)_+ +\lambda ||\mathbf w||^2 $$ So, the scaling will make a difference when we have the r
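The objective in this answer can be evaluated directly, which makes the scaling argument concrete. A small numpy sketch with made-up data points:

```python
import numpy as np

def svm_objective(w, X, y, lam):
    # sum_i (1 - y_i * w . x_i)_+  +  lam * ||w||^2
    hinge = np.maximum(0.0, 1.0 - y * (X @ w)).sum()
    return hinge + lam * np.dot(w, w)

X = np.array([[1.0, 2.0], [-1.0, -1.0]])
y = np.array([1.0, -1.0])
w = np.array([0.5, 0.5])

# Both points sit outside the margin, so only the ridge penalty remains.
print(svm_objective(w, X, y, lam=1.0))  # → 0.5
```

Rescaling the features changes the margins `y * (X @ w)` but not `||w||^2`, so the balance between hinge loss and penalty (and hence the effective amount of regularization) shifts with the feature scale, which is the answer's hypothesis.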
46,133
Raw data outperforms Z-score transformed data in SVM classification
Two points: First: are your classes distributed equally (I think this is in General Abrial's link; but I haven't read it so unsure)? I.e. do you have 50% class A, and 50% class B? Or is it that you have one class to be more dominant? Accuracy is very sensitive to the class imbalance issue. E.g. if 90% of test cases ar...
Raw data outperforms Z-score transformed data in SVM classification
Two points: First: are your classes distributed equally (I think this is in General Abrial's link; but I haven't read it so unsure)? I.e. do you have 50% class A, and 50% class B? Or is it that you h
Raw data outperforms Z-score transformed data in SVM classification Two points: First: are your classes distributed equally (I think this is in General Abrial's link; but I haven't read it so unsure)? I.e. do you have 50% class A, and 50% class B? Or is it that you have one class to be more dominant? Accuracy is very ...
Raw data outperforms Z-score transformed data in SVM classification Two points: First: are your classes distributed equally (I think this is in General Abrial's link; but I haven't read it so unsure)? I.e. do you have 50% class A, and 50% class B? Or is it that you h
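The accuracy pitfall in this answer's first point is easy to demonstrate with a hypothetical 90/10 class split:

```python
import numpy as np

# 90% of test labels are class 0; a classifier that always predicts 0
# reaches 90% accuracy while detecting no positives at all.
y_true = np.array([0] * 90 + [1] * 10)
y_pred = np.zeros_like(y_true)

accuracy = (y_true == y_pred).mean()
recall_pos = (y_pred[y_true == 1] == 1).mean()
print(accuracy, recall_pos)  # → 0.9 0.0
```

This is why, under class imbalance, metrics such as recall, precision, or balanced accuracy are more informative than raw accuracy when comparing the scaled and unscaled models.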
46,134
Why does Bengio, Goodfellow and Courville deep learning theory book claim $\hat{y} = x w_1 ... w_i ... w_l$ is a non-linear function of $w_i$?
It is not incorrect. It is nonlinear in $\mathbf{w}$. Can you take a matrix, premultiply the weight vector by it, and get that? What I mean is that there is no $\mathbf{A}$ such that $\mathbf{A} \mathbf{w} = x w_1 \cdots w_l$. The key is to see that it is a function in $\mathbf{w}$. $$f(\mathbf{w}) = x w_1 \cdots w_l$...
Why does Bengio, Goodfellow and Courville deep learning theory book claim $\hat{y} = x w_1 ... w_i .
It is not incorrect. It is nonlinear in $\mathbf{w}$. Can you take a matrix, premultiply the weight vector by it, and get that? What I mean is that there is no $\mathbf{A}$ such that $\mathbf{A} \math
Why does Bengio, Goodfellow and Courville deep learning theory book claim $\hat{y} = x w_1 ... w_i ... w_l$ is a non-linear function of $w_i$? It is not incorrect. It is nonlinear in $\mathbf{w}$. Can you take a matrix, premultiply the weight vector by it, and get that? What I mean is that there is no $\mathbf{A}$ such...
Why does Bengio, Goodfellow and Courville deep learning theory book claim $\hat{y} = x w_1 ... w_i . It is not incorrect. It is nonlinear in $\mathbf{w}$. Can you take a matrix, premultiply the weight vector by it, and get that? What I mean is that there is no $\mathbf{A}$ such that $\mathbf{A} \math
46,135
Why does Bengio, Goodfellow and Courville deep learning theory book claim $\hat{y} = x w_1 ... w_i ... w_l$ is a non-linear function of $w_i$?
As @amoeba said, it is non-linear in the combination $\mathbf \prod_{i=1}^{n} w_{i}$. Let's see an example where each $w_{i}$ is doubled. Then the new value becomes $2^{n}$ times the old value whereas it should have been just 2 times the old value for a linear function.
Why does Bengio, Goodfellow and Courville deep learning theory book claim $\hat{y} = x w_1 ... w_i .
As @amoeba said, it is non-linear in the combination $\mathbf \prod_{i=1}^{n} w_{i}$. Let's see an example where each $w_{i}$ is doubled. Then the new value becomes $2^{n}$ times the old value whereas
Why does Bengio, Goodfellow and Courville deep learning theory book claim $\hat{y} = x w_1 ... w_i ... w_l$ is a non-linear function of $w_i$? As @amoeba said, it is non-linear in the combination $\mathbf \prod_{i=1}^{n} w_{i}$. Let's see an example where each $w_{i}$ is doubled. Then the new value becomes $2^{n}$ time...
Why does Bengio, Goodfellow and Courville deep learning theory book claim $\hat{y} = x w_1 ... w_i . As @amoeba said, it is non-linear in the combination $\mathbf \prod_{i=1}^{n} w_{i}$. Let's see an example where each $w_{i}$ is doubled. Then the new value becomes $2^{n}$ times the old value whereas
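The doubling argument in this answer can be checked numerically with arbitrary weights:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 5
w = rng.normal(size=n)
x = 3.0

f = lambda weights: x * np.prod(weights)

# Doubling every weight multiplies the output by 2**n = 32, not by 2,
# so f is not a linear function of the weight vector.
ratio = f(2 * w) / f(w)
print(ratio)  # → 32.0
```

A linear map would satisfy f(2w) = 2 f(w); the product of the weights fails this for any n > 1.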
46,136
Why does Bengio, Goodfellow and Courville deep learning theory book claim $\hat{y} = x w_1 ... w_i ... w_l$ is a non-linear function of $w_i$?
To be fair, it is easy to misunderstand it to mean "a nonlinear function of each of the weights $w_i$." In that case, your analysis would be correct. For what it's worth, since we're working with real numbers here, you can use a simpler definition of linear function: $$ f(\alpha x) = \alpha f(x) $$ In other words, tha...
Why does Bengio, Goodfellow and Courville deep learning theory book claim $\hat{y} = x w_1 ... w_i .
To be fair, it is easy to misunderstand it to mean "a nonlinear function of each of the weights $w_i$." In that case, your analysis would be correct. For what it's worth, since we're working with real
Why does Bengio, Goodfellow and Courville deep learning theory book claim $\hat{y} = x w_1 ... w_i ... w_l$ is a non-linear function of $w_i$? To be fair, it is easy to misunderstand it to mean "a nonlinear function of each of the weights $w_i$." In that case, your analysis would be correct. For what it's worth, since ...
Why does Bengio, Goodfellow and Courville deep learning theory book claim $\hat{y} = x w_1 ... w_i . To be fair, it is easy to misunderstand it to mean "a nonlinear function of each of the weights $w_i$." In that case, your analysis would be correct. For what it's worth, since we're working with real
46,137
How is repeated-measures ANOVA a special case of linear mixed models?
Repeated measurements ANOVA is a special case of linear mixed effects models because it is less flexible with regard to the structure of the "random-effects" part of a linear mixed effects model as well as it assumes some further conveniences for the data at hand. The repeated measurements ANOVA starts with the gener...
How is repeated-measures ANOVA a special case of linear mixed models?
Repeated measurements ANOVA is a special case of linear mixed effects models because it is less flexible with regard to the structure of the "random-effects" part of a linear mixed effects model as w
How is repeated-measures ANOVA a special case of linear mixed models? Repeated measurements ANOVA is a special case of linear mixed effects models because it is less flexible with regard to the structure of the "random-effects" part of a linear mixed effects model as well as it assumes some further conveniences for th...
How is repeated-measures ANOVA a special case of linear mixed models? Repeated measurements ANOVA is a special case of linear mixed effects models because it is less flexible with regard to the structure of the "random-effects" part of a linear mixed effects model as w
46,138
Finding the distribution of iid variables X, Y given distribution of X-Y
We know two independent random variables $X,Y$ have a common distribution, and we know both have zero expectation (or some other convenient centering). We don't know what that common distribution is, but we do know the distribution of $D=X-Y$. Is it possible to determine the distribution function of $X$ and $Y$? Let u...
Finding the distribution of iid variables X, Y given distribution of X-Y
We know two independent random variables $X,Y$ have a common distribution, and we know both have zero expectation (or some other convenient centering). We don't know what that common distribution is,
Finding the distribution of iid variables X, Y given distribution of X-Y We know two independent random variables $X,Y$ have a common distribution, and we know both have zero expectation (or some other convenient centering). We don't know what that common distribution is, but we do know the distribution of $D=X-Y$. Is...
Finding the distribution of iid variables X, Y given distribution of X-Y We know two independent random variables $X,Y$ have a common distribution, and we know both have zero expectation (or some other convenient centering). We don't know what that common distribution is,
46,139
Finding the distribution of iid variables X, Y given distribution of X-Y
As a counter-example: If $X \sim N(\mu, \sigma^2)$ and $Y \sim N(\mu, \sigma^2)$ are iid, then $Z = X - Y$ is $N(0, 2 \sigma^2)$. Conversely, if we observe that $Z \sim N(0, 2 \sigma^2)$, then there are an infinite number of Normal parents $X \sim N(\mu, \sigma^2)$ that satisfy same ... i.e. while we can fix $\sigma$, ...
Finding the distribution of iid variables X, Y given distribution of X-Y
As a counter-example: If $X \sim N(\mu, \sigma^2)$ and $Y \sim N(\mu, \sigma^2)$ are iid, then $Z = X - Y$ is $N(0, 2 \sigma^2)$. Conversely, if we observe that $Z \sim N(0, 2 \sigma^2)$, then there a
Finding the distribution of iid variables X, Y given distribution of X-Y As a counter-example: If $X \sim N(\mu, \sigma^2)$ and $Y \sim N(\mu, \sigma^2)$ are iid, then $Z = X - Y$ is $N(0, 2 \sigma^2)$. Conversely, if we observe that $Z \sim N(0, 2 \sigma^2)$, then there are an infinite number of Normal parents $X \sim...
Finding the distribution of iid variables X, Y given distribution of X-Y As a counter-example: If $X \sim N(\mu, \sigma^2)$ and $Y \sim N(\mu, \sigma^2)$ are iid, then $Z = X - Y$ is $N(0, 2 \sigma^2)$. Conversely, if we observe that $Z \sim N(0, 2 \sigma^2)$, then there a
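A quick simulation of this counter-example: two very different common means produce the same distribution for the difference, so observing Z = X − Y cannot pin down the mean:

```python
import numpy as np

rng = np.random.default_rng(3)
n, sigma = 200_000, 2.0

# Whatever the common mean mu, Z = X - Y is N(0, 2*sigma^2): mu cancels.
for mu in (0.0, 50.0):
    x = rng.normal(mu, sigma, n)
    y = rng.normal(mu, sigma, n)
    z = x - y
    print(round(z.mean(), 3), round(z.var(), 3))  # ≈ 0.0 and ≈ 8.0 both times
```

Both runs show mean ≈ 0 and variance ≈ 2σ² = 8, illustrating that only σ, not μ, is recoverable from the difference.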
46,140
Error bars, linear regression and "standard deviation" for point
Your dream of a "global SD" for estimated errors in $x$ given a value of $y$ is not possible. If what you care about is the SD of a prediction of $x$ given a value of $y$, then what you should examine is the square root of equation (10) of the linked reference. The same result is provided in equation 5.25 of an online ...
Error bars, linear regression and "standard deviation" for point
Your dream of a "global SD" for estimated errors in $x$ given a value of $y$ is not possible. If what you care about is the SD of a prediction of $x$ given a value of $y$, then what you should examine
Error bars, linear regression and "standard deviation" for point Your dream of a "global SD" for estimated errors in $x$ given a value of $y$ is not possible. If what you care about is the SD of a prediction of $x$ given a value of $y$, then what you should examine is the square root of equation (10) of the linked refe...
Error bars, linear regression and "standard deviation" for point Your dream of a "global SD" for estimated errors in $x$ given a value of $y$ is not possible. If what you care about is the SD of a prediction of $x$ given a value of $y$, then what you should examine
46,141
Error bars, linear regression and "standard deviation" for point
There is a relatively simple resolution of this problem: compute a “fiducial limit” based on “inverse regression” [Draper & Smith 1981]. The idea is to create confidence envelopes for the true line and then find the range of $X$ values where these envelopes enclose the target response. After introducing some notation ...
Error bars, linear regression and "standard deviation" for point
There is a relatively simple resolution of this problem: compute a “fiducial limit” based on “inverse regression” [Draper & Smith 1981]. The idea is to create confidence envelopes for the true line a
Error bars, linear regression and "standard deviation" for point There is a relatively simple resolution of this problem: compute a “fiducial limit” based on “inverse regression” [Draper & Smith 1981]. The idea is to create confidence envelopes for the true line and then find the range of $X$ values where these envelo...
Error bars, linear regression and "standard deviation" for point There is a relatively simple resolution of this problem: compute a “fiducial limit” based on “inverse regression” [Draper & Smith 1981]. The idea is to create confidence envelopes for the true line a
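The envelope-inversion idea in this answer can be sketched on simulated calibration data (not the poster's): build a 95% pointwise confidence band for the mean line, then read off the x-range where the band covers the target response. This is an illustrative sketch of inverse regression, not the full Draper & Smith treatment:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)

# Simulated calibration line y = 1 + 2x + noise.
x = np.linspace(0.0, 10.0, 50)
y = 1.0 + 2.0 * x + rng.normal(0.0, 0.5, x.size)

n = x.size
b1, b0 = np.polyfit(x, y, 1)
resid = y - (b0 + b1 * x)
s = np.sqrt(resid @ resid / (n - 2))
Sxx = ((x - x.mean()) ** 2).sum()
tcrit = stats.t.ppf(0.975, n - 2)

# Pointwise confidence envelope for the mean line on a fine grid of x.
grid = np.linspace(0.0, 10.0, 2001)
se = s * np.sqrt(1.0 / n + (grid - x.mean()) ** 2 / Sxx)
fit = b0 + b1 * grid

# Invert: the x values whose envelope covers the target response y0
# form an approximate fiducial interval for x given y = y0.
y0 = 11.0
inside = (fit - tcrit * se <= y0) & (y0 <= fit + tcrit * se)
print(grid[inside].min(), grid[inside].max())
```

The interval straddles the point estimate (y0 − b0)/b1 and widens as y0 moves away from the centre of the calibration data, exactly as the fiducial-limit construction predicts.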
46,142
Beginner references to understand probabilistic principal component analysis (PPCA)
PPCA was introduced in Tipping & Bishop, 1999, Probabilistic Principal Component Analysis. I would say that this paper itself is one of the best references: it is concise and clear. Nevertheless, it might be difficult for a beginner. If so, you can try Bishop's textbook Pattern Recognition and Machine Learning, which i...
Beginner references to understand probabilistic principal component analysis (PPCA)
PPCA was introduced in Tipping & Bishop, 1999, Probabilistic Principal Component Analysis. I would say that this paper itself is one of the best references: it is concise and clear. Nevertheless, it m
Beginner references to understand probabilistic principal component analysis (PPCA) PPCA was introduced in Tipping & Bishop, 1999, Probabilistic Principal Component Analysis. I would say that this paper itself is one of the best references: it is concise and clear. Nevertheless, it might be difficult for a beginner. If...
Beginner references to understand probabilistic principal component analysis (PPCA) PPCA was introduced in Tipping & Bishop, 1999, Probabilistic Principal Component Analysis. I would say that this paper itself is one of the best references: it is concise and clear. Nevertheless, it m
46,143
A non-uniform distribution of $p$-values...again
When you write this: Suppose X is a Rayleigh random variable (with shape parameter b). It can be shown that the random variable Y=X/mean(X) You're talking about dividing by the population mean $\mu_X$, a fixed constant, giving $Y_i=X_i/\mu_X$. Then in your experiment, you divide by the sample mean: standardize the...
A non-uniform distribution of $p$-values...again
When you write this: Suppose X is a Rayleigh random variable (with shape parameter b). It can be shown that the random variable Y=X/mean(X) You're talking about dividing by the population mean $\mu
A non-uniform distribution of $p$-values...again When you write this: Suppose X is a Rayleigh random variable (with shape parameter b). It can be shown that the random variable Y=X/mean(X) You're talking about dividing by the population mean $\mu_X$, a fixed constant, giving $Y_i=X_i/\mu_X$. Then in your experiment...
A non-uniform distribution of $p$-values...again When you write this: Suppose X is a Rayleigh random variable (with shape parameter b). It can be shown that the random variable Y=X/mean(X) You're talking about dividing by the population mean $\mu
46,144
Possible to do Fourier decomposition using Linear Regression?
You can do that (estimate the magnitudes via regression), but you can actually estimate the magnitudes and the phases using regression, so it will still work when the phases are unknown, and it will work in the presence of noise in $y$ (in the sense at least that you can still estimate those coefficients, though themse...
Possible to do Fourier decomposition using Linear Regression?
You can do that (estimate the magnitudes via regression), but you can actually estimate the magnitudes and the phases using regression, so it will still work when the phases are unknown, and it will w
Possible to do Fourier decomposition using Linear Regression? You can do that (estimate the magnitudes via regression), but you can actually estimate the magnitudes and the phases using regression, so it will still work when the phases are unknown, and it will work in the presence of noise in $y$ (in the sense at least...
Possible to do Fourier decomposition using Linear Regression? You can do that (estimate the magnitudes via regression), but you can actually estimate the magnitudes and the phases using regression, so it will still work when the phases are unknown, and it will w
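A minimal version of what this answer describes, on a synthetic signal: the design matrix holds a cos/sin pair per candidate frequency, and least squares recovers both magnitude and (unknown) phase:

```python
import numpy as np

t = np.linspace(0.0, 1.0, 500, endpoint=False)
y = 3.0 * np.sin(2 * np.pi * 5 * t + 0.7) + 1.5 * np.sin(2 * np.pi * 12 * t - 0.3)

# One cos and one sin column per frequency; their coefficients encode
# both the amplitude and the phase of each component.
freqs = [5, 12]
X = np.column_stack([f(2 * np.pi * k * t) for k in freqs for f in (np.cos, np.sin)])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)

amps = [np.hypot(coef[2 * i], coef[2 * i + 1]) for i in range(len(freqs))]
print(np.round(amps, 6))  # amplitudes ≈ [3.0, 1.5]
```

The phases come out of `np.arctan2` on each cos/sin coefficient pair, and the same regression still works with noise added to y.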
46,145
Possible to do Fourier decomposition using Linear Regression?
Of course you can do this, meaning neither the regression nor the Fourier transformation police will come and bust your house. But would it make sense? I think not. I guess you are trying to decompose a function(!) of x, where $y_i=f(x_i)$. But this is not what regression is about. Regression assumes that you do not se...
Possible to do Fourier decomposition using Linear Regression?
Of course you can do this, meaning neither the regression nor the Fourier transformation police will come and bust your house. But would it make sense? I think not. I guess you are trying to decompose
Possible to do Fourier decomposition using Linear Regression? Of course you can do this, meaning neither the regression nor the Fourier transformation police will come and bust your house. But would it make sense? I think not. I guess you are trying to decompose a function(!) of x, where $y_i=f(x_i)$. But this is not w...
Possible to do Fourier decomposition using Linear Regression? Of course you can do this, meaning neither the regression nor the Fourier transformation police will come and bust your house. But would it make sense? I think not. I guess you are trying to decompose
46,146
Train waiting time in probability
Picture in your mind's eye that the whole train schedule is already generated; it looks like a line with marks on it, where each mark represents a train arriving. Two consecutive marks are 15 minutes apart half the time, and 45 minutes apart half the time. Now, imagine a person arrives; this means randomly ...
Train waiting time in probability
Picture in your mind's eye that the whole train schedule is already generated; it looks like a line with marks on it, where each mark represents a train arriving. Two consecutive marks are 15
Train waiting time in probability Picture in your mind's eye that the whole train schedule is already generated; it looks like a line with marks on it, where each mark represents a train arriving. Two consecutive marks are 15 minutes apart half the time, and 45 minutes apart half the time. Now, imagine a pe...
Train waiting time in probability Picture in your mind's eye that the whole train schedule is already generated; it looks like a line with marks on it, where each mark represents a train arriving. Two consecutive marks are 15
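The mental picture in this answer translates directly into a simulation. The average wait comes out near 18.75 minutes, not 15, because a uniformly random arrival lands in a long gap three-quarters of the time:

```python
import numpy as np

rng = np.random.default_rng(4)

# Schedule: successive gaps of 15 or 45 minutes, equally likely.
gaps = rng.choice([15.0, 45.0], size=200_000)
arrivals = np.cumsum(gaps)

# Passengers show up at uniformly random times; wait = next train - now.
times = rng.uniform(0.0, arrivals[-1], size=200_000)
waits = arrivals[np.searchsorted(arrivals, times)] - times

print(waits.mean())  # ≈ 18.75
```

Analytically this is the length-biased mean E[L²]/(2E[L]) = (225/2 + 2025/2)/60 = 18.75, matching the 0.25 · 7.5 + 0.75 · 22.5 computation in the next answer.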
46,147
Train waiting time in probability
Your simulator is correct. Since 15-minute and 45-minute intervals are equally likely, the 45-minute intervals cover three times as much of the timeline, so you end up in a 15-minute interval 25% of the time and in a 45-minute interval 75% of the time. In a 15-minute interval, you have to wait $15 \cdot \frac12 = 7.5$ minutes on average. In a 45-minute interval, you have to wait ...
Train waiting time in probability
Your simulator is correct. Since 15-minute and 45-minute intervals are equally likely, the 45-minute intervals cover three times as much of the timeline, so you end up in a 15-minute interval 25% of the time and in a 45-minute interval 75% of the time. In a 15-m
Train waiting time in probability Your simulator is correct. Since 15-minute and 45-minute intervals are equally likely, the 45-minute intervals cover three times as much of the timeline, so you end up in a 15-minute interval 25% of the time and in a 45-minute interval 75% of the time. In a 15-minute interval, you have to wait $15 \cdot \frac12 = 7.5$ minutes on average. In a 45 ...
Train waiting time in probability Your simulator is correct. Since 15-minute and 45-minute intervals are equally likely, the 45-minute intervals cover three times as much of the timeline, so you end up in a 15-minute interval 25% of the time and in a 45-minute interval 75% of the time. In a 15-m
46,148
How to determine how many variables and what kind of variables a table of data has?
There's most likely no single correct answer to the question "how many variables does this dataset have", one can structure the data in different ways as you've shown leading to different numbers of columns. However, there's probably a good answer to "what structure would make this dataset most amenable for analysis", ...
How to determine how many variables and what kind of variables a table of data has?
There's most likely no single correct answer to the question "how many variables does this dataset have", one can structure the data in different ways as you've shown leading to different numbers of c
How to determine how many variables and what kind of variables a table of data has? There's most likely no single correct answer to the question "how many variables does this dataset have", one can structure the data in different ways as you've shown leading to different numbers of columns. However, there's probably a ...
How to determine how many variables and what kind of variables a table of data has? There's most likely no single correct answer to the question "how many variables does this dataset have", one can structure the data in different ways as you've shown leading to different numbers of c
46,149
How to determine how many variables and what kind of variables a table of data has?
@nick-eng pretty much answered it all (+1)! I just thought I could add some examples to illustrate his points and to show why the long-format (your first table) is more efficient to work with, especially when you are working with Hadley Wickham's R packages ggplot2 and plyr. But I do have to say that I often prefer to ...
How to determine how many variables and what kind of variables a table of data has?
@nick-eng pretty much answered it all (+1)! I just thought I could add some examples to illustrate his points and to show why the long-format (your first table) is more efficient to work with, especia
How to determine how many variables and what kind of variables a table of data has? @nick-eng pretty much answered it all (+1)! I just thought I could add some examples to illustrate his points and to show why the long-format (your first table) is more efficient to work with, especially when you are working with Hadley...
How to determine how many variables and what kind of variables a table of data has? @nick-eng pretty much answered it all (+1)! I just thought I could add some examples to illustrate his points and to show why the long-format (your first table) is more efficient to work with, especia
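For readers working in Python rather than R, `pandas.melt` does the same wide-to-long reshaping that the ggplot2/plyr workflow in this answer relies on. Toy data:

```python
import pandas as pd

# Wide format: one row per subject, one column per measurement condition.
wide = pd.DataFrame({"subject": ["s1", "s2"],
                     "condA": [10, 12],
                     "condB": [20, 18]})

# Long format: one row per (subject, condition) observation -- the shape
# that grouped summaries and ggplot2-style plotting expect.
long = wide.melt(id_vars="subject", var_name="condition", value_name="score")
print(long)
```

In the long layout "condition" is explicitly a variable with its own column, which answers the "how many variables" question operationally: whatever varies across observations gets a column.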
46,150
Bayesian vs. frequentist estimation
The simple answer for all your questions is: in a Bayesian model you include a priori information in your model besides the data, $$ \text{posterior} \propto \text{likelihood} \times \text{prior} $$ so if you include in your model additional information, then your estimates (mean, standard deviation, etc.) can possibly di...
Bayesian vs. frequentist estimation
The simple answer for all your questions is: in a Bayesian model you include a priori information in your model besides the data, $$ \text{posterior} \propto \text{likelihood} \times \text{prior} $$ so
Bayesian vs. frequentist estimation The simple answer for all your questions is: in a Bayesian model you include a priori information in your model besides the data, $$ \text{posterior} \propto \text{likelihood} \times \text{prior} $$ so if you include in your model additional information, then your estimates (mean, stan...
Bayesian vs. frequentist estimation The simple answer for all your questions is: in a Bayesian model you include a priori information in your model besides the data, $$ \text{posterior} \propto \text{likelihood} \times \text{prior} $$ so
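The proportionality above has a closed form in the conjugate normal case, which makes the comparison with the sample mean concrete. A sketch with hypothetical numbers and known data variance:

```python
import numpy as np

# Normal likelihood with known variance sigma2, normal prior on the mean:
# posterior precision = prior precision + n / sigma2,
# posterior mean = precision-weighted blend of prior mean and sample mean.
def posterior(prior_mean, prior_var, data, sigma2):
    n = len(data)
    post_var = 1.0 / (1.0 / prior_var + n / sigma2)
    post_mean = post_var * (prior_mean / prior_var + n * np.mean(data) / sigma2)
    return post_mean, post_var

data = [4.0, 6.0, 5.0]                                 # sample mean 5.0
flat_mean, _ = posterior(0.0, 1e9, data, sigma2=1.0)   # nearly flat prior
tight_mean, _ = posterior(0.0, 0.1, data, sigma2=1.0)  # informative prior at 0

# Flat prior: posterior mean ~ sample mean; informative prior: shrunk toward 0.
print(round(flat_mean, 4), round(tight_mean, 4))
```

With an (essentially) flat prior the posterior mean coincides with the frequentist sample mean; with an informative prior it is pulled toward the prior mean, which is exactly the "additional information" effect the answer describes.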
46,151
Bayesian vs. frequentist estimation
I've covered this topic on other threads listed below. I will address your specific questions here. So how does this a-posteriori distribution compare to the sample mean point estimate? Is the expected value of the a-posteriori distribution equal to the sample mean? Is the standard deviation of the a-posteriori dis...
Bayesian vs. frequentist estimation
I've covered this topic on other threads listed below. I will address your specific questions here. So how does this a-posteriori distribution compare to the sample mean point estimate? Is the expe
Bayesian vs. frequentist estimation I've covered this topic on other threads listed below. I will address your specific questions here. So how does this a-posteriori distribution compare to the sample mean point estimate? Is the expected value of the a-posteriori distribution equal to the sample mean? Is the standa...
Bayesian vs. frequentist estimation I've covered this topic on other threads listed below. I will address your specific questions here. So how does this a-posteriori distribution compare to the sample mean point estimate? Is the expe
46,152
Maximal model for linear mixed-effects model for repeated measures design
The maximal structure would also need to include a random effect for the interaction between color and shape, that is: Y ~ color * shape + (color + shape + color:shape | subject) This will result in all your predictors (color, shape and their interaction) having a fixed effect (constant for all subjects), and a random...
Maximal model for linear mixed-effects model for repeated measures design
The maximal structure would also need to include a random effect for the interaction between color and shape, that is: Y ~ color * shape + (color + shape + color:shape | subject) This will result in
Maximal model for linear mixed-effects model for repeated measures design The maximal structure would also need to include a random effect for the interaction between color and shape, that is: Y ~ color * shape + (color + shape + color:shape | subject) This will result in all your predictors (color, shape and their in...
Maximal model for linear mixed-effects model for repeated measures design The maximal structure would also need to include a random effect for the interaction between color and shape, that is: Y ~ color * shape + (color + shape + color:shape | subject) This will result in
46,153
Maximal model for linear mixed-effects model for repeated measures design
First I should say that if your aim was to formulate a mixed model that was exactly analogous to a repeated measures ANOVA you would also have to enforce compound symmetry, which in lme would be done as follows library(lmerTest) library(nlme) fit=lme(Y~ color*shape, random=~1|subject, correlation=corCompSymm(form=~...
46,154
Pr(Z>|z|) values and the level of significance
Firstly, the p-value given for the Z-statistic would have to be interpreted as how likely it is that a result as extreme or more extreme than that observed would have occurred under the null hypothesis. I.e. 0.96 would in principle mean that the data are providing very little evidence that the variable is needed (while ...
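The arithmetic behind that interpretation can be sketched with only the standard normal CDF (editor's illustration): the two-sided p-value for a Wald z-statistic is 2·(1 − Φ(|z|)).

```python
from math import erf, sqrt

def two_sided_p(z):
    """Two-sided p-value for a Wald z-statistic under the standard normal."""
    phi = 0.5 * (1.0 + erf(abs(z) / sqrt(2.0)))  # Phi(|z|)
    return 2.0 * (1.0 - phi)

# A tiny z gives a p-value near 0.96: essentially no evidence against H0.
print(round(two_sided_p(0.05), 2))
# z = 1.96 sits right at the conventional 0.05 threshold.
print(round(two_sided_p(1.96), 3))
```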
46,155
Pr(Z>|z|) values and the level of significance
You are using the normal approximation and specifically the Wald test so you do what you would do in a regular t-test. That is, you reject the null hypothesis if the probability of the event $\left\{Z \geq |z| \right\}$ is lower than the conventional threshold of $0.05$. Alternatively you fail to reject the null hypoth...
46,156
Pr(Z>|z|) values and the level of significance
The value of the coefficient and its large standard error suggest that what we are seeing here is separation or the Hauck-Donner effect which has its own tag hauck-donner-effect which has a clear and helpful wiki excerpt. I think therefore the debate about $t$ versus $z$ is a red herring. Profile likelihood would be th...
46,157
Definition of improper priors
The classical definition of an improper prior in Bayesian statistics is one of a measure $\text{d}\pi$ with infinite mass $$\int_\Theta \text{d}\pi(\theta)=+\infty$$ See, e.g., Hartigan's Bayes Theory, which formalises quite nicely the use of improper priors. Any measure $\text{d}\pi$ with finite mass can be normalised...
46,158
Definition of improper priors
If $p(\theta) \propto f(\theta)$ with $\int_{\theta \in \Theta} f(\theta)\, d\theta = c$ for some finite constant $c$, you just need to normalize the density to $f(\theta)/c$, which defines a proper prior.
46,159
Independent samples t-test with unequal sample sizes
The t-test requires a set of assumptions. It assumes your data are i.i.d. (independent and identically distributed) and come from a normal distribution. If you care to compare the means of the two groups (and they follow the assumptions), then yes - you can use that test. As JohnK, you may wish to note if you want to assum...
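A minimal sketch in Python (scipy assumed available; the groups are simulated, not the asker's data): `scipy.stats.ttest_ind` accepts unequal group sizes, and `equal_var=False` gives the Welch variant mentioned in the other answers.

```python
import numpy as np
from scipy import stats

# Illustrative simulated groups of unequal size (not the asker's data).
rng = np.random.default_rng(1)
g1 = rng.normal(10.0, 2.0, size=30)
g2 = rng.normal(10.0, 2.0, size=80)   # unequal n is not a problem

t_pooled, p_pooled = stats.ttest_ind(g1, g2)                 # pooled-variance t-test
t_welch, p_welch = stats.ttest_ind(g1, g2, equal_var=False)  # Welch's t-test

print(p_pooled, p_welch)
```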
46,160
Independent samples t-test with unequal sample sizes
The two-sample t-test makes no assumption about equal sample sizes. However, if you have $2n$ observations, the best allocation of them is into two groups, each with $n$ observations. This is part of the experimental design; if you already have your observations, then you don’t get to allocate them into groups. (They e...
46,161
Independent samples t-test with unequal sample sizes
Sure, you do not need equal sample sizes between groups. You do not even need equal variances if you use the Welch specification of the t-test, which estimates the two group variances separately in the denominator. You also do not need normality of the data, just approximate normality of the sample means and hence of their difference (if you test H0 as E(Diff)...
46,162
Estimating sample mean from a biased sample (whose generative process is known)
Formalizing gung's suggestion, you can estimate the sample mean by inverse probability weighting, also known as the Horvitz-Thompson estimator. It is admissible in the class of unbiased estimators. The H-T estimator can be used to estimate the sum $S = \sum_{i=1}^n y_i$ of sample values in a population using a random s...
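A small simulation sketch of the Horvitz-Thompson idea (the population, the inclusion probabilities, and the Poisson-sampling scheme are all illustrative assumptions): weighting each sampled value by $1/\pi_i$ gives an unbiased estimate of the population total, and dividing by $N$ would give the mean.

```python
import numpy as np

# Editor's sketch of the Horvitz-Thompson estimator under Poisson sampling:
# unit i enters the sample independently with known probability pi_i.
rng = np.random.default_rng(2)
N = 200
y = rng.uniform(0.0, 10.0, size=N)      # population values (assumed)
pi = rng.uniform(0.2, 0.9, size=N)      # known inclusion probabilities
true_total = y.sum()

estimates = []
for _ in range(2000):
    sampled = rng.random(N) < pi                         # draw one sample
    estimates.append((y[sampled] / pi[sampled]).sum())   # HT estimate of the total

# Averaged over many samples the HT estimate matches the true total.
print(true_total, np.mean(estimates))
```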
46,163
R mixed models: lme, lmer or both? which one is relevant for my data and why?
It doesn't look like these are "totally different results"; the fixed-effects coefficients are the same between the 2 outputs, with slightly different estimates for standard errors and random effects. That's not surprising given that the lmer fit used restricted maximum likelihood (REML) while the lme fit used maxi...
46,164
Is E(Y|X) a function of Y?
$$g_{Y\mid X = x}(y\mid x) =\frac {f_{X,Y}(x,y)}{\int_{-\infty}^{\infty}f_{X,Y}(x,y)dy} = \frac {f_{X,Y}(x,y)}{f_{X}(x)}$$ and so $$E[Y\mid X = x] = \int_{-\infty}^{\infty} y g_{Y\mid X = x}(y\mid x) \ dy = \frac{1}{f_{X}(x)} \int_{-\infty}^{\infty} y f_{X,Y}(x,y) \ dy = h(x)\tag{1}$$ Thus, the number $E[Y\mid X = x]$ ...
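A numerical check of equation (1) for a case where $h(x)$ is known in closed form (the bivariate normal example is an editor's illustration): conditioning on $X \approx x_0$ recovers $h(x_0) = \rho x_0$, a function of $x$ alone.

```python
import numpy as np

# Numerical illustration (editor's example): for a standard bivariate normal
# with correlation rho, h(x) = E[Y | X = x] = rho * x.
rng = np.random.default_rng(3)
rho = 0.6
n = 500_000
x = rng.normal(size=n)
y = rho * x + np.sqrt(1.0 - rho**2) * rng.normal(size=n)

emp = {}
for x0 in (-1.0, 0.0, 1.5):
    band = np.abs(x - x0) < 0.05        # condition on X being near x0
    emp[x0] = y[band].mean()
    print(x0, emp[x0], rho * x0)        # empirical vs. theoretical h(x0)
```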
46,165
Is E(Y|X) a function of Y?
@Neznajka pointed out the formal basis for why $E[Y|X]$ is a function of $X$, not $Y$. At a more abstract level, you can also see this by considering that $E[Y|X]$ takes as input a value of $X$ and maps it to a conditional expected value of $Y$. Therefore, you can write $E[Y|X]$ as a plain old univariate function $f: \...
46,166
Can gradient descent find a better solution than least squares regression?
No. These two methods both solve the same problem: minimizing the sum of squared errors. One method is much faster than the other, but they are both arriving at the same answer. This would be akin to asking "which gives a better answer to 10/4: long division or a calculator?"
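A sketch of the point (simulated data; the step size and iteration count are arbitrary choices): the closed-form least-squares solution and plain gradient descent on the same loss land on the same coefficients.

```python
import numpy as np

# Simulated regression problem (editor's example): both routes minimize the
# same sum of squared errors, so they arrive at the same coefficients.
rng = np.random.default_rng(4)
X = np.column_stack([np.ones(100), rng.normal(size=100)])
y = X @ np.array([2.0, -3.0]) + rng.normal(scale=0.5, size=100)

# Closed-form least squares.
beta_ols, *_ = np.linalg.lstsq(X, y, rcond=None)

# Plain gradient descent on the same loss.
beta_gd = np.zeros(2)
for _ in range(20_000):
    grad = 2.0 * X.T @ (X @ beta_gd - y) / len(y)
    beta_gd -= 0.01 * grad

print(beta_ols, beta_gd)  # agree to numerical tolerance
```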
46,167
Can gradient descent find a better solution than least squares regression?
OLS solves for BLUE (best linear unbiased estimator) only when all Gauss-Markov assumptions are met. You need a linear model, independence, identical distribution, exogeneity, and homoscedasticity. In scenarios without linearity, we can still solve for a local minimum using gradient descent. Preferably, stochastic gra...
46,168
Calculate mean and variance of a distribution of a distribution
The expectation and variance computations in your example can be handled with the law of total expectation and law of total variance. The law of total expectation in your case reads: $$ E(\theta) = E_{\mu} ( E_{\theta} (\theta \mid \mu ) ) $$ where the subscripts indicate which variable is being averaged over in the ex...
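The two laws can be checked by simulation on a toy hierarchy (the normal-normal setup below is an assumed example, not from the question): with $\mu \sim N(0, \tau^2)$ and $\theta \mid \mu \sim N(\mu, \sigma^2)$, the laws give $E(\theta) = 0$ and $\mathrm{Var}(\theta) = \sigma^2 + \tau^2$.

```python
import numpy as np

# Toy hierarchy (assumed example): mu ~ N(0, tau2), theta | mu ~ N(mu, sigma2).
# Law of total expectation: E(theta) = E(mu) = 0.
# Law of total variance:    Var(theta) = E[Var(theta|mu)] + Var(E[theta|mu])
#                                      = sigma2 + tau2.
rng = np.random.default_rng(5)
tau2, sigma2 = 2.0, 1.0
mu = rng.normal(0.0, np.sqrt(tau2), size=1_000_000)
theta = rng.normal(mu, np.sqrt(sigma2))

print(theta.mean(), theta.var())  # close to 0 and to sigma2 + tau2 = 3
```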
46,169
Estimate lag for granger causality test
Introduction This test in your question seems rather heavy handed. It is conducting pairwise bivariate Granger causality testing over all pairs in the data set. I'll choose two to examine. require(lmtest) ts(datSel$cpi)->cpi ts(datSel$lohn)->wages #i presume Note that in your test at lag order 4 we get that both wa...
46,170
Why do we need to log transform independent variable in logistic regression
The reason for such transformations has nothing to do with their distribution. Instead, it has to do with the functional form of the effect. Say we want to know the effect of the number of publications on the probability of getting tenure. It is reasonable to believe that getting an extra publication when one ...
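The functional-form point can be made concrete with a line of arithmetic (the coefficient value is arbitrary): with $\beta \ln x$ in the linear predictor, doubling $x$ shifts the log-odds by the same amount $\beta \ln 2$ at every level of $x$, so the effect of one extra unit shrinks as $x$ grows.

```python
import numpy as np

# With beta * ln(x) in the linear predictor, doubling x always shifts the
# log-odds by beta * ln(2), whatever the current level of x (beta arbitrary).
beta = 0.7
shifts = [beta * np.log(2.0 * x) - beta * np.log(x) for x in (1.0, 5.0, 20.0)]
print(shifts)  # three identical shifts: the per-unit effect shrinks as x grows
```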
46,171
What is the median of the school grades A A B B ?
Both $A$ and $B$ would be valid medians, since at least half the data are $\leq A$ and $\geq B$; you could call one the "lower" median and one the "upper" median, or if uniquely defined medians are important you could try to make the set that includes both "the median".
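A tiny sketch (the ordinal coding of the grades is an arbitrary assumption): with an even count, the two middle order statistics give the lower and upper medians.

```python
# The ordinal coding of the grades is an arbitrary assumption.
grades = ["A", "A", "B", "B"]
order = {"A": 4, "B": 3, "C": 2}
s = sorted(grades, key=lambda g: order[g])   # ascending: B, B, A, A
lower, upper = s[len(s) // 2 - 1], s[len(s) // 2]
print(lower, upper)  # lower median B, upper median A
```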
46,172
Avoiding a spline dip
There are a number of ways to avoid such effects (e.g. smoothing splines can often be tweaked so as to avoid a dip, or maybe some form of monotonic spline to the left of the peak will be needed), but I think in this particular case a simple approach might be to transform (perhaps take logs or square roots), fit a splin...
46,173
How to estimate variance of classifier on test set?
Is there a principled way to estimate the variance of the classifiers using the testing distribution? Yes, and contrary to your intuition it is actually easy to do this by cross validation. The idea is that iterated/repeated cross validation (or out-of-bag if you prefer to resample with replacement) allows you to compa...
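A sketch of iterated cross validation with scikit-learn (assumed available; the dataset and model are illustrative): repeating k-fold with different random splits and comparing the aggregated scores across repetitions exposes the instability component of the variance.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.tree import DecisionTreeClassifier

# Iterated cross validation (dataset and model are illustrative): repeat
# 5-fold CV with different random splits and look at the spread of the
# aggregated accuracy across repetitions.
X, y = make_classification(n_samples=300, random_state=0)
scores_per_repeat = []
for seed in range(20):
    cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=seed)
    scores = cross_val_score(DecisionTreeClassifier(random_state=0), X, y, cv=cv)
    scores_per_repeat.append(scores.mean())

print(np.mean(scores_per_repeat), np.std(scores_per_repeat))
```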
46,174
How to estimate variance of classifier on test set?
You want some sort of bootstrap or method for generating independent measurements of performance. You can't look at the k cross validation folds or divide the test set into k partitions as the observations won't be independent. This can and will introduce significant bias in the estimate of variance. See for example Yo...
46,175
Piecewise regression with constraints
If the goal is simply to fit a function, you could treat this as an optimization problem: y <- c(4.5,4.3,2.57,4.40,4.52,1.39,4.15,3.55,2.49,4.27,4.42,4.10,2.21,2.90,1.42,1.50,1.45,1.7,4.6,3.8,1.9) x <- c(320,419,650,340,400,800,300,570,720,480,425,460,675,600,850,920,975,1022,450,520,780) plot(x, y, col="black",pch...
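A Python analogue of the same optimization approach (scipy assumed available; the flat-decline-flat parameterization is one assumed way to encode the constraints, and the data are the vectors given in the answer): fit the plateau levels and breakpoints by minimizing the sum of squared errors directly.

```python
import numpy as np
from scipy.optimize import minimize

# Data taken from the answer; the shape (plateau y_hi, linear drop between
# breakpoints b1 < b2, plateau y_lo) is an assumed parameterization.
x = np.array([320, 419, 650, 340, 400, 800, 300, 570, 720, 480, 425,
              460, 675, 600, 850, 920, 975, 1022, 450, 520, 780], float)
y = np.array([4.5, 4.3, 2.57, 4.40, 4.52, 1.39, 4.15, 3.55, 2.49, 4.27, 4.42,
              4.10, 2.21, 2.90, 1.42, 1.50, 1.45, 1.7, 4.6, 3.8, 1.9])

def f(params, x):
    y_hi, y_lo, b1, b2 = params
    denom = max(b2 - b1, 1e-6)                  # guard against b2 <= b1
    t = np.clip((x - b1) / denom, 0.0, 1.0)     # 0 before b1, 1 after b2
    return y_hi + (y_lo - y_hi) * t

def sse(params):
    return float(np.sum((y - f(params, x)) ** 2))

p0 = [4.5, 1.5, 450.0, 850.0]                   # rough visual starting guess
res = minimize(sse, x0=p0, method="Nelder-Mead")
print(res.x, sse(res.x))
```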
46,176
Piecewise regression with constraints
If you also want confidence and prediction intervals you can first approximate your three-phase piecewise linear function by a smooth function, do an nls fit, and then use the investr package (this helps for the fitting as the function is then continuously differentiable). In your case: x <- c(478, 525, 580, 650, 7...
46,177
Piecewise regression with constraints
In the general case, the linear piecewise regression for three segments leads to this kind of function: The parameters were computed according to the direct method (not iterative) given on page 30 of the paper https://fr.scribd.com/document/380941024/Regression-par-morceaux-Piecewise-Regression-pdf But the first and t...
46,178
Interpretation of marginal effects in Logit Model with log$\times$independent variable
You know that in a logit: $$Pr[y = 1 \vert x,z] = p = \frac{\exp (\alpha + \beta \cdot \ln x + \gamma z)}{1+\exp (\alpha + \beta \cdot \ln x + \gamma z )}. $$ After some tedious calculus and simplification, the partial of that with respect to $x$ becomes: $$ \frac{\partial Pr[y=1 \vert x,z]}{\partial x} = \frac{\beta}{...
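The derivative can be verified numerically (the coefficient values below are arbitrary, and $z$ is held fixed): the analytic marginal effect $(\beta/x)\,p\,(1-p)$ matches a central finite difference.

```python
import numpy as np

# Coefficient values are arbitrary; z is held fixed.
a, b, g = -1.0, 0.8, 0.5
z = 1.0

def p(x):
    eta = a + b * np.log(x) + g * z
    return 1.0 / (1.0 + np.exp(-eta))

x0 = 3.0
analytic = (b / x0) * p(x0) * (1.0 - p(x0))          # (beta/x) * p * (1 - p)
numeric = (p(x0 + 1e-6) - p(x0 - 1e-6)) / 2e-6       # central difference
print(analytic, numeric)
```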
46,179
Negative $R^2$ at random regression forest [duplicate]
Explained variance is here defined as R² = 1 - SSresidual/SStotal = 1 - sum((ŷ-y)²)/sum((mean(y)-y)²) = 1 - mse/var(y). It is correct that the squared Pearson product-moment correlation cannot be negative. In the documentation of the randomForest function, the Values section says: rsq (regression only) “pseudo R-square...
46,180
Chi-squared-like test with absolute instead of squared differences for calculating test statistic
You don't very clearly explain your concern, but I suppose that you're probably worried about the relative weight the chi-square puts on the cases where $(O_i-E_i)^2$ is large relative to the $E_i$ on the denominator. A single such term can dominate the statistic. I also assume (at least to start with) that you're aski...
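To make the weighting concern concrete, a small example with invented counts: a cell with a small expected count dominates the chi-square statistic even though its absolute discrepancy is comparable to the other cells'.

```python
# Each chi-square contribution is (O - E)^2 / E; a cell with small E can
# dominate the whole statistic even when its absolute discrepancy is modest.
observed = [48, 52, 6]
expected = [52.0, 53.0, 1.0]
terms = [(o - e) ** 2 / e for o, e in zip(observed, expected)]
chi_sq = sum(terms)
```

Here the discrepancies are 4, 1, and 5, yet the last cell contributes almost the entire statistic because its expected count is 1.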
46,181
Chi-squared-like test with absolute instead of squared differences for calculating test statistic
Just to add to this, the concern about weighting for chi-square and related tests is well-founded in certain cases. One solution is to use the (unweighted) Euclidean distance. This isn't the absolute distance, but I think it's a little more intuitive than the weighted Euclidean distance involved in the chi-square test....
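A sketch of using the unweighted Euclidean statistic with a Monte Carlo null, assuming equally likely categories (the observed counts below are invented):

```python
import random

random.seed(0)

def euclid_stat(observed, expected):
    # Unweighted Euclidean distance between observed and expected counts
    return sum((o - e) ** 2 for o, e in zip(observed, expected)) ** 0.5

def draw_counts(n, k):
    # One multinomial draw with k equally likely categories
    counts = [0] * k
    for _ in range(n):
        counts[random.randrange(k)] += 1
    return counts

n, k = 120, 4
expected = [n / k] * k
observed = [45, 35, 25, 15]  # hypothetical data
stat = euclid_stat(observed, expected)

# Monte Carlo p-value: fraction of null draws at least as extreme
null_stats = [euclid_stat(draw_counts(n, k), expected) for _ in range(2000)]
p_value = sum(s >= stat for s in null_stats) / len(null_stats)
```

The same scaffolding works for any distance you prefer, since the null distribution is simulated rather than derived.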
46,182
Compute the probability that the provided classifier label is correct
SVMs do produce a decision function but it does not directly correspond to probability. There is a way in LibSVM (and sklearn which uses that under the hood) to get probabilities using Platt scaling which sounds like what you're looking for. Some more details about how that works are here: How does sklearn.svm.svc's ...
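For intuition, here is a minimal pure-Python sketch of the idea behind Platt scaling: fit a sigmoid P(y=1|d) = 1/(1+exp(A·d+B)) to decision values by gradient descent on the log-loss. The decision values below are simulated, not from a real SVM; in practice you would just set probability=True in sklearn's SVC.

```python
import math, random

random.seed(1)

# Toy decision values d and labels y: larger d should mean y = 1 more often
d_vals = [random.gauss(1.0, 1.0) for _ in range(200)] + \
         [random.gauss(-1.0, 1.0) for _ in range(200)]
labels = [1] * 200 + [0] * 200

def sigmoid(t):
    return 1.0 / (1.0 + math.exp(-t))

# Fit P(y=1 | d) = 1 / (1 + exp(A*d + B)) by gradient descent on log-loss
A, B, lr = 0.0, 0.0, 0.01
for _ in range(2000):
    gA = gB = 0.0
    for d, y in zip(d_vals, labels):
        p = sigmoid(-(A * d + B))
        gA += (p - y) * (-d)
        gB += (p - y) * (-1.0)
    A -= lr * gA / len(labels)
    B -= lr * gB / len(labels)

def platt_prob(d):
    return sigmoid(-(A * d + B))
```

LibSVM's actual implementation adds regularized targets and a more robust optimizer, but the mapping it learns has exactly this shape.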
46,183
Compute the probability that the provided classifier label is correct
What you want sounds akin to a precision-recall (PR) curve. PR curves show precision (TP / (TP + FP)) as a function of recall (TP / (TP + FN)). Every PR point corresponds to a threshold $T$ on the SVM's output $d$ (signed distance to the hyperplane): the prediction is positive if $d \geq T$ and negative otherwise. As such, you c...
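A minimal sketch of building the PR pairs by sweeping the threshold over the scores (the scores and labels below are invented):

```python
# Build (threshold, precision, recall) triples by sweeping a threshold
# down through the sorted scores.
scores = [0.9, 0.8, 0.7, 0.6, 0.55, 0.4, 0.3, 0.2]
labels = [1,   1,   0,   1,   0,    1,   0,   0  ]

def pr_curve(scores, labels):
    pairs = sorted(zip(scores, labels), reverse=True)
    total_pos = sum(labels)
    tp = fp = 0
    curve = []
    for score, label in pairs:
        if label == 1:
            tp += 1
        else:
            fp += 1
        precision = tp / (tp + fp)
        recall = tp / total_pos
        curve.append((score, precision, recall))
    return curve

curve = pr_curve(scores, labels)
```

Each triple is the operating point you would get by classifying everything at or above that score as positive.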
46,184
Specific robust measure of scale
This is possible in a somewhat artificial sense: just adjust $R$ a little whenever $S$ is nonzero, to guarantee $R$ is nonzero in such cases. "Resistant" means $R$ has a finite breakdown point, but the possibility of a sample with values $(Y, Y, \ldots, Y, Y^\prime \ne Y)$ means that $R$ has to respond to the value of...
46,185
Specific robust measure of scale
I will here gather together comments with the import that this is not possible in a simple manner. Consider an example such as $y = 7, 7, \cdots, 7, 42$ with two distinct values, one of which occurs only once. The singleton in this example, $42$, has outlier flavour. So in broad terms resistant measures of scale are o...
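The point is easy to verify: on such a sample the MAD (median absolute deviation) is exactly zero while the standard deviation is not, e.g. in Python:

```python
# y = 7, 7, ..., 7, 42: the MAD is 0 while the standard deviation is not.
from statistics import median, stdev

y = [7] * 9 + [42]
med = median(y)                          # 7
mad = median(abs(v - med) for v in y)    # deviations are nine 0s and one 35
sd = stdev(y)
```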
46,186
Mean and variance of a Beta distribution with $\alpha \ge 1$ or $\beta \ge 1$?
The parameters of a $\text{Beta}(\alpha,\beta)$ distribution with mean $0\lt m\lt 1$ and variance $0\lt v\lt m(1-m)$ are $$\alpha = m\frac{m(1-m)- v}{v},\quad \beta = (1-m)\frac{m(1-m)-v}{v}.$$ This shaded contour plot of $\alpha$ has contours ranging from $0$ (at the top of the colored region) to $1$ (along the botto...
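The inversion above is easy to code; a small sketch (the target mean and variance below are arbitrary), with a round-trip check against the Beta moments:

```python
def beta_params(m, v):
    """Moment-matching: alpha, beta for a Beta with mean m and variance v."""
    if not (0 < m < 1 and 0 < v < m * (1 - m)):
        raise ValueError("need 0 < m < 1 and 0 < v < m(1-m)")
    common = (m * (1 - m) - v) / v
    return m * common, (1 - m) * common

a, b = beta_params(0.3, 0.01)

# Round-trip check against the Beta moments
mean = a / (a + b)
var = a * b / ((a + b) ** 2 * (a + b + 1))
```

For m = 0.3 and v = 0.01 this gives α = 6, β = 14, and the moments come back out exactly.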
46,187
Mean and variance of a Beta distribution with $\alpha \ge 1$ or $\beta \ge 1$?
The mean of the Beta distribution is $$\mu = \frac {\alpha}{\alpha + \beta}$$ We want to see whether restricting the permissible range of $\mu$ will guarantee that we will have either $\{\alpha \geq 1, \beta >0\}$, OR $\{\alpha >0, \beta \geq 1\}$. Treating the mean as a function of the parameters we obtain $$\frac {\...
46,188
TF-IDF versus Cosine Similarity in Document Search
Xeon is right in that TF-IDF and cosine similarity are two different things. TF-IDF will give you a representation for a given term in a document. Cosine similarity will give you a score for two different documents that share the same representation. However, "one of the simplest ranking functions is computed by summin...
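A bare-bones sketch of the pipeline — TF-IDF for representation, cosine for scoring — using a hand-rolled tf·idf rather than any particular library (the toy documents are invented):

```python
import math
from collections import Counter

docs = [
    "the cat sat on the mat",
    "the dog sat on the log",
    "cats and dogs are pets",
]

def tfidf_vectors(docs):
    tokenized = [d.split() for d in docs]
    vocab = sorted({w for toks in tokenized for w in toks})
    n = len(docs)
    # Document frequency of each vocabulary word
    df = {w: sum(w in toks for toks in tokenized) for w in vocab}
    vectors = []
    for toks in tokenized:
        tf = Counter(toks)
        vectors.append([tf[w] / len(toks) * math.log(n / df[w]) for w in vocab])
    return vectors

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

vecs = tfidf_vectors(docs)
sim_01 = cosine(vecs[0], vecs[1])  # share "the", "sat", "on"
sim_02 = cosine(vecs[0], vecs[2])  # no overlapping terms at all
```

The first pair shares several terms and scores positive; the last pair shares none, so its cosine is exactly zero.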
46,189
TF-IDF versus Cosine Similarity in Document Search
TF-IDF is about features and their normalization. The cosine metric is the metric you will use to score. If my memory serves, TF normalizes the word counts in a vector. You can then compare TF-normalized vectors using the cosine metric. Adding the IDF weight is about weighting down too frequent terms (e.g. stop words) s...
46,190
Expected value of Y = (1/X) where $X \sim Gamma$
The calculation of $E\left[X^{-1}\right]$ when $X$ is a Gamma random variable with order parameter $n$ and rate parameter $\lambda$ requires recognition of the density of another Gamma random variable (with order parameter $n-1$ and rate parameter $\lambda$) in the integral given by the law of the unconscious statistic...
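The resulting identity, $E[X^{-1}] = \lambda/(n-1)$ for $n > 1$, can be checked by numerically integrating $x^{-1}$ against the Gamma density:

```python
import math

def gamma_pdf(x, n, lam):
    # Gamma density with order (shape) n and rate lam
    return lam ** n * x ** (n - 1) * math.exp(-lam * x) / math.gamma(n)

# E[1/X] for X ~ Gamma(n, rate=lam) should equal lam / (n - 1) for n > 1
n, lam = 5, 2.0
dx = 1e-4
# Midpoint-rule integral of (1/x) * f(x) over (0, 20]; the tail is negligible
numeric = sum(gamma_pdf(dx * (i + 0.5), n, lam) / (dx * (i + 0.5)) * dx
              for i in range(200000))
exact = lam / (n - 1)
```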
46,191
How are individual trees added together in boosted regression tree?
They assume that you're keeping track of a "current estimator" $\hat f$, which is a sum of all the trees you've seen so far. (In code you would just store this as an array of all the trees you've seen so far.) The $\leftarrow$ sign just means "takes the new value"--so when they say "add the new tree" they mean, basical...
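A minimal sketch of that loop in Python — the "trees" here are one-split stumps on 1D data, the ensemble is stored as a list, and the prediction is the shrunken sum of all stumps fitted so far (the toy data are invented):

```python
# Minimal boosting loop on 1D data: each round fits a stump to the current
# residuals, and the ensemble prediction is the running sum of all stumps.
def fit_stump(xs, residuals):
    best = None
    for t in xs:
        left = [r for x, r in zip(xs, residuals) if x <= t]
        right = [r for x, r in zip(xs, residuals) if x > t]
        if not left or not right:
            continue
        ml, mr = sum(left) / len(left), sum(right) / len(right)
        sse = sum((r - ml) ** 2 for r in left) + sum((r - mr) ** 2 for r in right)
        if best is None or sse < best[0]:
            best = (sse, t, ml, mr)
    _, t, ml, mr = best
    return lambda x: ml if x <= t else mr

xs = [0.0, 1.0, 2.0, 3.0, 4.0, 5.0]
ys = [0.1, 0.3, 1.8, 2.1, 3.9, 4.2]

trees, shrinkage = [], 0.5
predict = lambda x: sum(shrinkage * tree(x) for tree in trees)

for _ in range(50):
    residuals = [y - predict(x) for x, y in zip(xs, ys)]
    trees.append(fit_stump(xs, residuals))   # "f <- f + lambda * new tree"

mse = sum((y - predict(x)) ** 2 for x, y in zip(xs, ys)) / len(xs)
```

The `trees` list is exactly the "current estimator" from the text: appending a tree is the $\leftarrow$ update, and summing over the list evaluates $\hat f$.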
46,192
Computing the power of Fisher's exact test in R
What you are asking for here is a post-hoc power analysis. (More specifically, "the probability of correctly rejecting the null hypothesis" is the power, and 1-power is beta, "the probability of a type-II error". You ask for both, but we only need one to know the other.) We take your existing dataset as the alternat...
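A pure-Python sketch of the simulation approach (the group sizes and proportions below are hypothetical, not your data): compute the two-sided Fisher p-value from the hypergeometric distribution, then estimate power as the rejection rate over tables simulated under the alternative.

```python
import random
from math import comb

random.seed(7)

def fisher_p(a, b, c, d):
    """Two-sided Fisher's exact p-value for the 2x2 table [[a, b], [c, d]]."""
    r1, r2, c1 = a + b, c + d, a + c
    n = r1 + r2
    def prob(x):
        # Hypergeometric probability of a table with top-left cell x
        return comb(r1, x) * comb(r2, c1 - x) / comb(n, c1)
    p_obs = prob(a)
    return sum(prob(x) for x in range(max(0, c1 - r2), min(r1, c1) + 1)
               if prob(x) <= p_obs * (1 + 1e-9))

# Post-hoc power: simulate tables under the assumed true proportions
n1, p1 = 20, 0.8
n2, p2 = 20, 0.4
reps = 500
rejections = 0
for _ in range(reps):
    a = sum(random.random() < p1 for _ in range(n1))
    c = sum(random.random() < p2 for _ in range(n2))
    if fisher_p(a, n1 - a, c, n2 - c) < 0.05:
        rejections += 1
power = rejections / reps
```

The probability of a type-II error is then simply `1 - power`.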
46,193
How to calculate confidence intervals for Precision & Recall (from a signal detection matrix)?
The following approach might be more accurate and more efficient. Goutte, C., & Gaussier, E. (2005, March). A probabilistic interpretation of precision, recall and F-score, with implication for evaluation. In European Conference on Information Retrieval (pp. 345-359). Springer Berlin Heidelberg.
46,194
How to calculate confidence intervals for Precision & Recall (from a signal detection matrix)?
I'll summarise the approaches which are sketched by the other two answers. Suggested by @marsu: assume that your confusion matrix $C$ has a multinomial distribution $M(n; π)$; then the distribution of the $TP$ is binomial. Assume a symmetric beta prior for precision $p$ and recall $r$, that is $p,r ∼ Beta(λ, λ)$. Th...
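A sketch of the Beta-posterior intervals in Python, using Monte Carlo draws for the quantiles rather than an exact inverse CDF (the confusion counts and λ = 1/2 below are illustrative):

```python
import random

random.seed(3)

# Hypothetical confusion counts
TP, FP, FN = 80, 10, 20
lam = 0.5  # symmetric Beta(1/2, 1/2) prior

def beta_interval(successes, failures, lam, draws=20000, level=0.95):
    # Equal-tailed interval from Monte Carlo draws of the Beta posterior
    samples = sorted(random.betavariate(successes + lam, failures + lam)
                     for _ in range(draws))
    lo = samples[int((1 - level) / 2 * draws)]
    hi = samples[int((1 + level) / 2 * draws)]
    return lo, hi

prec_lo, prec_hi = beta_interval(TP, FP, lam)  # interval for precision
rec_lo, rec_hi = beta_interval(TP, FN, lam)    # interval for recall
```

With 20,000 draws the quantiles are accurate to a couple of decimal places, which is usually enough for reporting.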
46,195
How to calculate confidence intervals for Precision & Recall (from a signal detection matrix)?
An answer here suggests using bootstrapped statistics; we've done this at my place of employment and it seems to do the right thing. Confidence interval for precision and recall in classification
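A minimal bootstrap sketch in Python (the confusion counts are invented): resample the per-example outcomes with replacement and take percentile bounds of the resampled precision.

```python
import random

random.seed(11)

# Per-example outcomes reconstructed from a hypothetical confusion matrix
outcomes = ["TP"] * 80 + ["FP"] * 10 + ["FN"] * 20 + ["TN"] * 90

def precision_of(sample):
    tp = sample.count("TP")
    fp = sample.count("FP")
    return tp / (tp + fp) if tp + fp else 0.0

# 2000 bootstrap resamples; 95% percentile interval for precision
boot = sorted(precision_of(random.choices(outcomes, k=len(outcomes)))
              for _ in range(2000))
ci_low, ci_high = boot[int(0.025 * 2000)], boot[int(0.975 * 2000)]
```

The same loop works for recall (count "FN" in the denominator instead of "FP") or any other confusion-matrix statistic.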
46,196
How to calculate confidence intervals for Precision & Recall (from a signal detection matrix)?
Please note that the approach described in the paper suggested by @Marsu_ is a Bayesian rather than a frequentist one. This means that the intervals it provides, despite what the article claims, are credible intervals, not the confidence ones; and those are in fact very different in interpretation. The Bayesian approa...
46,197
How can MANOVA report a significant difference when none of the univariate ANOVAs reaches significance?
Here is a figure illustrating how it is possible: Two populations (red and blue) are sampled from the same 2D distribution, but slightly shifted from each other. On the left $N=100$, on the right $N=1000$ in each group. In both cases, I conduct two univariate ANOVAs (for the $x$ dimension and for the $y$ dimension) and one mult...
46,198
Detecting Bimodal Distribution
This looks like a typical task of detecting components of a mixture distribution, with an umbrella topic being finite mixture models. If you use R, you don't need to implement K-means or other clustering algorithms, as there are enough existing packages that already do that and more. One of the most popular ones - mixtoo...
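As a language-agnostic illustration of what such packages fit, here is a bare-bones EM algorithm for a two-component 1D Gaussian mixture (the simulated data stand in for your bimodal sample); dedicated packages do the same with many more safeguards:

```python
import math, random

random.seed(42)

# Two overlapping components, as in a bimodal histogram
data = [random.gauss(0.0, 1.0) for _ in range(300)] + \
       [random.gauss(5.0, 1.0) for _ in range(300)]

def norm_pdf(x, mu, sd):
    return math.exp(-0.5 * ((x - mu) / sd) ** 2) / (sd * math.sqrt(2 * math.pi))

# EM for a two-component 1D Gaussian mixture
mu = [min(data), max(data)]   # crude initialization
sd = [1.0, 1.0]
w = [0.5, 0.5]
for _ in range(50):
    # E-step: responsibilities of each component for each point
    resp = []
    for x in data:
        p = [w[k] * norm_pdf(x, mu[k], sd[k]) for k in range(2)]
        s = sum(p)
        resp.append([pk / s for pk in p])
    # M-step: update mixture weights, means, and standard deviations
    for k in range(2):
        nk = sum(r[k] for r in resp)
        w[k] = nk / len(data)
        mu[k] = sum(r[k] * x for r, x in zip(resp, data)) / nk
        sd[k] = math.sqrt(sum(r[k] * (x - mu[k]) ** 2
                              for r, x in zip(resp, data)) / nk)
```

After convergence, the fitted means and weights tell you where the two modes sit and how much mass each carries.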
46,199
Detecting Bimodal Distribution
I have often used a scheme (Intervention Detection), even though this is not time series data, to determine the presence of "an intercept change" or a change in the mean value. An intercept change is essentially a mean change, or in other words a level shift. Please post your data and I will try and help you. Both plots sug...
46,200
Multiple Regression or Separate Simple Regressions?
This is a somewhat confusing debate because I am not sure how the two-regression method is going to achieve your goal. Regression models with two continuous independent variables can be visualized as a 3-D space: The blue lines represent the association between $x$ and $y$ at each value of $z$. Without $z$ in the mod...