How to show "not a number" on a line plot?
Stephen Few has an article, Displaying Missing Values and Incomplete Periods in Time Series, which discusses some possibilities, such as using skipped, dashed, or faded connections for missing Y values. Those work well when the X values are at regular intervals (above) but not so well when the X values are irregular (below). The difference is whether the positions of the missing values can be inferred or not. For irregularly spaced X values, it is more appropriate to show the missing positions with some sort of marginal plot, which could be done with dots (below) or with a rug plot.
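The "can the positions be inferred" distinction is easy to make concrete. A minimal sketch (my own illustration, not from Few's article): when the X values lie on a regular grid of known step, the missing positions fall out of the gaps between consecutive observed values; with irregular X spacing no such reconstruction is possible, which is why a marginal dot or rug plot is needed instead.

```python
import numpy as np

# Observed x values on a regular unit grid; the points at x = 3 and x = 6
# were dropped because their y values were NaN.
x_obs = np.array([0.0, 1.0, 2.0, 4.0, 5.0, 7.0])
step = 1.0  # known regular spacing

# Any gap wider than one step pinpoints the missing positions exactly.
missing = np.concatenate([
    np.arange(a + step, b, step)
    for a, b in zip(x_obs[:-1], x_obs[1:])
    if b - a > 1.5 * step
])

print(missing)  # [3. 6.] -- recoverable only because the grid is regular
```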
How to show "not a number" on a line plot?
It has been argued that a "bad" visualization is one that deceives or distorts. A subtle form of deception in line plots is to connect successive points with line segments (or higher-order splines), because doing so presents a compelling but false visual impression that (a) intermediate values (not in the dataset) exist and (b) the missing points fall along those segments. When your purpose is to show the data, care is needed not to interpolate visually across the points having NaN values. At the same time -- again to avoid a false impression -- you need some visually obvious mechanism to show the x coordinates of the missing values, without actually drawing any points.

These design constraints suggest decorating a default line plot with a rug plot. In R, with the data in arrays x and y, it looks like this:

plot(x, y, type="l", lwd=2, main="Default R Plots")
rug(x)

The gaps in the graph clearly show where values are missing, and the ticks on the "rug" at the bottom indicate exactly where they are missing (and how many values are missing). Unfortunately, this mechanism fails to show the isolated points! By erasing unnecessary ink, using line weight and color creatively, and plotting the missing points, we can clarify this plot: the rug has been placed outside the drawing region to make it clearer, and the ticks for the missing y values are made longer and clearer than the others. If this is too subtle, or if the objective is to draw attention to where the values are missing, you may prolong the rug plot to cover the drawing region and even draw all the data points. For those who would like to implement and improve on this, here is the R code to produce both types of plots.
for (prominent in c(FALSE, TRUE)) {
  plot(x, y, type="n", bty="n", tck=0.025,
       main=ifelse(prominent, "Prominent", "Subtle"))
  abline(h=0, col="Gray")
  if (prominent) abline(v=x[is.na(y)], col="#d0202040")
  lines(x, y, lwd=2)
  if (prominent) {
    points(x, y, pch=21, bg="#d02020")
  } else {
    # Mark only the isolated points: those whose neighbors are both missing.
    i <- !is.na(y)
    i <- !(c(FALSE, i[-length(i)]) | c(i[-1], FALSE))
    points(x[i], y[i], pch=21, bg="#d02020", cex=0.75)
  }
  rug(x, -0.04, col="Gray")
  rug(x[is.na(y)], -0.065, lwd=2, col="#d02020")
}
Statistical Analysis on Sparse data?
Pretty sketchy question, but answerable nonetheless. The answer is "yes," there is a lot one can do with sparse data. This response is far from complete but will review a few options in a kind of "DIY" shotgun listing. In other words, it is up to the analyst to decide which option may be appropriate to pursue.

The first consideration is to identify where the sparsity is occurring: is it in the features, in terms of a large, complex, combinatorial set of possibilities, or is it with respect to the target or dependent variable, which may have few observed responses, or both?

With respect to sparse or rare events in the target variable, e.g., when the response to a stimulus is recorded as 0/1 or "yes/no" and the response rate is very small, one common error is to model this using logistic regression. The mistake is this: it is well known that the logistic curve does not provide a good fit to the tails of its distribution. This means that with sparse or rare-event data, logistic regression will produce biased results. Commonly recommended "solutions" for this problem are to go out and get a larger sample of data or, alternatively, to specifically subsample those segments that are both important to the analysis and sparsely populated. This is a bad idea for at least two reasons: first, it's not always possible to simply "get more data," and second, even when possible, it can be prohibitively costly in terms of time and money. Better solutions are possible:

Gary King, Harvard political scientist and statistical methodologist, discusses rare-event analysis ... http://gking.harvard.edu/category/research-interests/methods/rare-events
Quantification and prediction of extreme events in a one-dimensional nonlinear dispersive wave model ... http://sandlab.mit.edu/Papers/14_PhysicaD.pdf
https://www.analyticsvidhya.com/blog/2014/01/logistic-regression-rare-event/

Concerning features, one needs to distinguish between structural zeros, which occur for logically impossible combinations, and sparsity, where a combination of features is possible but there just isn't enough information to populate that particular cell in the table. Consider healthcare or hospital data, where a combination such as male patients given a diagnostic code of "pregnancy" is possible from a purely computational point of view in terms of cross-classifying a set of features, but males actually giving birth is impossible, i.e., it is to be considered a structural zero. (But sex and gender are different constructs, so until transgender patients, e.g., female-to-male gender, have children, this will remain a structural zero.)

As noted, sparsely populated features are different, requiring special tools to facilitate analysis from those employed for target variables. The following is a "laundry list" or shotgun set of options for dealing with sparse features. Much of it was gathered by simply browsing for the keywords "inference from sparse data." Choose carefully from the list:

Adopt a Bayesian modeling framework. For instance, Gelman and Hill state in chapter 13 of their book Data Analysis Using Regression and Multilevel/Hierarchical Models that it is possible to analyse features with a sample size of 1. Frequentists might object to this claim. MCMC sampling provides a workaround for sparsely populated categorical features in that pooling data across the sampling iterations builds a distribution about a feature in the posterior, even in cases where there is a sample size of 1. Gelman, in his blog, also discusses sparsity here ... http://andrewgelman.com/2013/12/16/whither-the-bet-on-sparsity-principle-in-a-nonsparse-world/
Alan Agresti, Approximate Is Better than "Exact" for Interval Estimation of Binomial Proportions ... http://www.stat.ufl.edu/~aa/articles/agresti_coull_1998.pdf
Data mining for rare events ... http://www-users.cs.umn.edu/~aleks/pakdd04_tutorial.pdf
Comparison of different methods for modelling rare events data ... http://lib.ugent.be/fulltxt/RUG01/002/163/708/RUG01-002163708_2014_0001_AC.pdf
Statistical Inference Methods for Sparse Biological Time Series Data ... https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3114728/
Bayesian non-parametric models and inference for sparse and hierarchical latent structure ... http://cs.stanford.edu/people/davidknowles/daknowles_thesis.pdf
Statistical Inference: n-gram Models over Sparse Data ... http://www.sims.berkeley.edu/~jhenke/Tdm/TDM-Ch6.ppt
Lost in a random forest: Using Big Data to study rare events ... http://journals.sagepub.com/doi/pdf/10.1177/2053951715604333

And so on. Good luck.
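For the rare-events point above, one concrete remedy from King's work is the "prior correction" of the logit intercept when a model is fit on a sample whose event rate differs from the population rate (e.g., a deliberately balanced case-control subsample). A hedged sketch, assuming the logit model and a known population prevalence tau:

```python
import math

def adjust_intercept(b0, tau, ybar):
    """King-Zeng style prior correction: shift a logit intercept fitted on a
    sample with event rate ybar back to a population with event rate tau.
    (Illustrative; see King's rare-events papers for the full treatment.)"""
    return b0 - math.log(((1 - tau) / tau) * (ybar / (1 - ybar)))

# An intercept fitted on a balanced sample (ybar = 0.5) of an event with
# true prevalence 1% must be shifted down by log(99) ≈ 4.6:
print(adjust_intercept(-0.2, tau=0.01, ybar=0.5))
```

When the sample and population rates coincide, the correction vanishes, as it should.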
What is the most beginner-friendly book for information geometry?
I also think these books are quite hard to read at first (but I'm an applied guy). For me, it was simpler to start with scattered material/tutorials/applications using bits of IG, such as: Pattern learning and recognition on statistical manifolds: An information-geometric review.
Sum-to-zero constraint in one-way ANOVA
Consider for simplicity that $m=2$ and compare the models $\mu=0,\beta_1=0,\beta_2=2$, $\mu=1,\beta_1=-1,\beta_2=1$, $\mu=2,\beta_1=-2,\beta_2=0$. These models are all special cases of $(\mu,\beta_1,\beta_2)=(\mu,-\mu,2-\mu)$. You can see that whatever $\mu$ we choose, $\mu+\beta_1=0$ and $\mu+\beta_2=2$, so there's an infinite set of parameter-triples that match $E(Y_{1j})=0$ and $E(Y_{2j})=2$, and no way to distinguish between them. Consequently, while data will allow you to estimate the two group-means, those two pieces of information (two df) - no matter how precisely estimated - are not going to be enough to estimate the three parameters (three df) in the model -- there's an extra degree of freedom that allows you to move all three parameters in particular ways relative to each other while keeping the group-means the same. You need to restrict/constrain/regularize the situation in some way so that the model doesn't have more things to estimate than the design has the ability to identify.
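The aliasing described above can be verified numerically. A small numpy sketch (my own illustration): with an intercept plus one dummy column per group, the design matrix has rank 2 but carries 3 parameters, and the specific triples from the answer all produce identical fitted means.

```python
import numpy as np

# One-way layout: m = 2 groups, 3 observations each.
# Columns: intercept, group-1 dummy, group-2 dummy.
X = np.array([[1, 1, 0]] * 3 + [[1, 0, 1]] * 3, dtype=float)

rank = np.linalg.matrix_rank(X)
print(rank)  # 2 -- only two estimable quantities for three parameters

# The triples (mu, -mu, 2 - mu) all give the same cell means (0, 0, 0, 2, 2, 2):
for mu in (0.0, 1.0, 2.0):
    print(X @ np.array([mu, -mu, 2.0 - mu]))
```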
Variance of $Y$ in regression model?
$x_i$ is a single non-random variable, so by itself it has a variance of 0, and the formula you wrote simplifies to just $\sigma^2$. Normally $y_i$ is expressed as follows: $$y_i \sim N(\beta_1 + \beta_2x_i, \;\sigma^2)$$ This way it should be evident how the variance of $y_i$ is determined: $\beta_1 + \beta_2x_i$ only contributes to the expected value of $y_i$.
Variance of $Y$ in regression model?
Let's say you have the regression equation: $$ y_i = \beta_0 + \beta_1 x_i + \epsilon_i $$ Different books, different lecture notes, etc. follow two different approaches:

1. Treat $x_i$ as scalars. They're entirely exogenous. They're not random.
2. Treat $x_i$ as random variables.

The answer of @Jarko Dubbeldam takes approach (1). If $x_i$ is a scalar then simply: $$ \mathrm{Var}(y_i) = \mathrm{Var}(\epsilon_i )$$ In many settings, approach (1) is excessively restrictive (and it isn't necessary). If you take approach (2), though, you would need to write: $$ \mathrm{Var}(y_i \mid x_i ) = \mathrm{Var}(\epsilon_i )$$
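A quick simulation (my own illustration, with arbitrary parameter values) makes the point concrete: holding $x_i$ fixed, the term $\beta_0 + \beta_1 x_i$ shifts the mean of $y_i$ but contributes nothing to its variance.

```python
import numpy as np

rng = np.random.default_rng(0)
beta0, beta1, sigma = 1.0, 2.0, 0.5
x = 3.0  # a fixed, non-random x_i (approach 1, or conditioning under approach 2)

eps = rng.normal(0.0, sigma, size=100_000)
y = beta0 + beta1 * x + eps

print(y.mean())  # ≈ beta0 + beta1 * x = 7.0
print(y.var())   # ≈ sigma^2 = 0.25: the regression line shifts only the mean
```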
Compound Poisson random variable
We'll denote $\mu_N = \text{E}(N)$, $\mu_X = \text{E}(X)$ and $\sigma_N^2 = \text{Var}(N)$. To find the covariance we can use the formula \begin{align} \text{Cov}(S, N) &= \text{E}(SN) - \text{E}(S) \text{E}(N) \\ &= \text{E}(SN) - \mu_N^2 \mu_X \end{align} where the second equality is found by taking an iterated expectation \begin{align} \text{E}(S) &= \text{E} [ \text{E}(S \mid N) ] \\ &= \text{E} \left [ \text{E} \left (\sum_{i=1}^{N} X_i \mid N \right ) \right ] \\ &= \text{E} [ N \text{E}(X) ] \\ &= \mu_N \mu_X . \end{align} To find $\text{E}(SN)$ we make a similar conditioning argument \begin{align} \text{E}(SN) &= \text{E} \left [ \text{E}(SN \mid N) \right ] \\ &= \text{E} \left [ N^2 \text{E}(X) \right ] \\ &= \mu_X \left ( \sigma_N^2 + \mu_N^2 \right ) \end{align} and so we get \begin{align} \text{Cov}(S, N) &= \mu_X \sigma_N^2 . \end{align}
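The identity $\text{Cov}(S, N) = \mu_X \sigma_N^2$ is easy to check by simulation. A sketch (my own, with arbitrary choices of the distributions): $N \sim \text{Poisson}(3)$, so $\sigma_N^2 = 3$, and $X \sim \text{Uniform}(0, 4)$, so $\mu_X = 2$, giving a theoretical covariance of $2 \times 3 = 6$.

```python
import numpy as np

rng = np.random.default_rng(42)
lam, reps = 3.0, 200_000

N = rng.poisson(lam, reps)            # counts: Var(N) = lam = 3
X = rng.uniform(0.0, 4.0, N.sum())    # summands: E(X) = 2

# S_j = sum of the j-th block of N_j draws, computed via cumulative sums
# (this handles N_j = 0 correctly, giving S_j = 0).
csum = np.concatenate(([0.0], np.cumsum(X)))
ends = np.cumsum(N)
S = csum[ends] - csum[ends - N]

print(np.cov(S, N)[0, 1])  # ≈ mu_X * sigma_N^2 = 6
```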
What are the differences between generalized additive model, basis expansion and boosting?
Basis expansion implies a basis function. In mathematics, a basis function is an element of a particular basis for a function space. For example, sines and cosines form a basis for Fourier analysis and can duplicate any waveform shape (square waves, sawtooth waves, etc.) just by adding enough basis functions together. From Basis (linear algebra): "In mathematics, a set of elements (vectors) in a vector space V is called a basis, or a set of basis vectors, if the vectors are linearly independent and every vector in the vector space is a linear combination of this set." The object of finding basis functions is to create a spanning set. For example, "The real vector space R$^3$ has {(-1,0,0), (0,1,0), (0,0,1)} as a spanning set. This particular spanning set is also a basis. If (-1,0,0) were replaced by (1,0,0), it would also form the canonical basis of R$^3$."

For machine learning, "basis expansion" is merely a fancy way of saying "adding more linear terms to the model." The term is, for example, used precisely once in Boosting Algorithms: Regularization, Prediction and Model Fitting by Peter Buhlmann and Torsten Hothorn, so that if you missed the meaning entirely, you would not be out by much. Generalized additive model, in that same Buhlmann and Hothorn paper (which I would recommend reading), is just a way of saying we can add more linear terms to the model (e.g., as used for AdaBoost) to get an improvement in algorithm performance. This is just arithmetic addition of linear terms, so covariance, interdependence, or interaction between terms is ignored. This has its limitations, because probability densities add by convolution when they interact, which is not arithmetic addition.

Boosting is a machine learning ensemble meta-algorithm primarily for reducing bias, and also variance, in supervised learning, and a family of machine learning algorithms which convert weak learners to strong ones. Boosting is based on the question posed by Kearns and Valiant (1988, 1989): "Can a set of weak learners create a single strong learner?" A weak learner is defined to be a classifier which is only slightly correlated with the true classification (it can label examples better than random guessing). In contrast, a strong learner is a classifier that is arbitrarily well correlated with the true classification. Math-wise, this looks like weightings of the classifiers, where AdaBoost is the best known; other ensemble techniques are random forest and bagging.
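To make "basis expansion = adding more linear terms" concrete, here is a minimal numpy sketch (my own example): the model stays linear in its coefficients even though it fits a curve, because the extra "linear terms" are transformed copies of x.

```python
import numpy as np

x = np.linspace(-2.0, 2.0, 50)
y = 3.0 - 1.0 * x + 0.5 * x**2   # a noiseless quadratic, for clarity

# Basis expansion: augment the design matrix with 1, x, x^2 and solve by
# ordinary least squares -- still a linear model in the coefficients.
B = np.column_stack([np.ones_like(x), x, x**2])
coef, *_ = np.linalg.lstsq(B, y, rcond=None)

print(coef)  # recovers [3.0, -1.0, 0.5]
```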
GAM model selection
If you are using an extra penalty on each term, you can just fit the model and you are done (from the point of view of selection). The point of these penalties is to allow for shrinkage of the perfectly smooth functions in the spline basis expansion as well as the wiggly functions. The results of the model fit account for the selection/shrinkage. If you remove the insignificant terms and then refit, the inference results (say in summary() output) would not include the "effect" of the previous selection.

Assuming you have a well-chosen set of covariates and can fit the full model (a model with a smooth of each covariate plus any interactions you want), you should probably just work with the resulting fit of the shrunken full model. If a term is using effectively 0 degrees of freedom, it is having no effect on the fit/predictions at all. For the non-significant terms that have positive EDFs, by keeping them in you are effectively stating that these covariates have a small but non-zero effect. If you remove these terms as you suggest, you are saying explicitly that the effect is zero. In short, don't fit the reduced model; work with the full model to which shrinkage was applied.

The deviance explained of the reduced model can be lower because it has fewer terms with which to explain variation in the response. It's a bit like the $R^2$ of a model increasing as you add covariates.
Why lower.tail=F is used when manually calculating the p-value from a t score
Check out the documentation for R's pt() function:

    lower.tail: logical; if TRUE (default), probabilities are P[X ≤ x], otherwise P[X > x].

In other words, when lower.tail=FALSE you get the probability to the right of X (the first of your two diagrams). Or just run it for yourself:

> pt(2, 10)
[1] 0.963306
> pt(2, 10, lower.tail = FALSE)
[1] 0.03669402
Why lower.tail=F is used when manually calculating the p-value from a t score
The Student t distribution is symmetric, so if you calculate the area under the upper tail and multiply it by 2, you end up with the two-tailed p-value for your test score.
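This doubling trick works for any null distribution that is symmetric about zero. Python's standard library has no t CDF, so the sketch below illustrates the idea with the standard normal; in R you would pass pt() with the appropriate degrees of freedom in place of the cdf argument.

```python
from statistics import NormalDist

def two_tailed_p(stat, cdf):
    """Two-tailed p-value for a statistic whose null distribution
    is symmetric about zero: double the upper-tail area."""
    return 2 * (1 - cdf(abs(stat)))

phi = NormalDist().cdf
p = two_tailed_p(1.96, phi)  # close to 0.05 for the standard normal
```

Taking the absolute value first is what makes the symmetry explicit: the p-value for a statistic of -2 equals the p-value for +2.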
Explanation for this event on a high-dimensional dataset
This is a question about high-dimensional Euclidean geometry, related to the "curse of dimensionality". It comes down to this: almost all the surface area of a sphere in $d$-dimensional Euclidean space $E^d$ is concentrated around its equator. The nearest neighbors of any point will tend to be scattered in random directions around its equator, so their average will be near its center--which is the center of the sphere itself. The rest of this post explains why this is, provides an estimate for how much the distance shrinks, and presents a simulation (in $100$ dimensions) to support the conclusions. There's a nice statistical demonstration of this equatorial concentration lemma. Given that the squared radii of the points are concentrated around $d$ (a consequence of their $\chi^2(d)$ distribution), which places them close to a sphere of radius $\sqrt{d}$, we need only consider the angles made between the nearest neighbors with the original point. For each such angle $\theta$, use $(\cos(\theta)+1)/2$ to measure its size. This value decreases from $1$ with close neighbors, through $1/2$ at the equator, down to $0$ for diametrically opposite points. It has a Beta$((d-1)/2,(d-1)/2)$ distribution. I don't want to presume too much knowledge of Beta distributions, because little is actually needed. Here is an elementary way to reason about this particular distribution. Its mean is $1/2$--on average all the remaining points are halfway between a given point and its opposite--and its variance is $1/(4d)$. Its spread, as measured by the standard deviation, therefore is $1/(2\sqrt{d})$. Chebyshev's Inequality states that for any $k\ge 1$, at most $1/k^2$ of the probability lies beyond $k/(2\sqrt{d})$ of the equator. That's all we need to know. Fixing the number of points $n$ and the number of nearest neighbors $t$, choose $k$ so large that $t/n$ is substantially greater than $1/k$: a small multiple of $n/t$ will do. 
As $d$ grows, the value $k/(2\sqrt{d})$ shrinks down to zero. Therefore most of the $t$ nearest neighbors of any chosen point will be close to the equator. With this geometric result in mind, the answer is now obvious. The nearest neighbors of any given point will be scattered randomly near the equator within limited distances of the equator relative to that point. The cylindrical symmetry of the situation around the point's axis indicates that on average those neighbors will be almost directly beneath the north pole. Thus we expect this average of the neighbors to be much nearer the origin than the original point itself, which is close to $\sqrt{d}$. That's more or less a heuristic argument but we can obtain quantitative results, too. We may freely rotate the coordinate system so that the original point is "up," because a rotation changes neither the distances nor the probability distribution of the points. The height, in particular, has a standard Normal distribution. The average of the $t$ largest out of $n$ independent heights will be close to the average of their expectations and less than the expected maximum height. An approximation for that expected maximum is $\Phi^{-1}(1-1/(n+1))$ where $\Phi$ is the standard Normal distribution function. At the same time, each coordinate of the average is the average of $t$ standard Normal variables. The first $n-1$ of these coordinates are approximately independent of the last and their averages have Normal distributions with zero mean and variance of $1/t$. The sum of their squares consequently is approximately a multiple of a $\chi^2(d-1)$ distribution, with a mean of $(d-1)/t$. Therefore the squared distance of this average is expected to be near $$h(n,d)=(d-1)/t + \Phi^{-1}(1-1/(n+1))^2$$ and likely a little bit less. This is a reasonable estimate when $t$ is small compared to $n$; it becomes a gross overestimate as $t$ grows. 
We may explore this situation via simulation, using (a) the $\chi^2(d)$ distribution of all squared distances as one reference and (b) this estimate $h(n,d)$ as another reference. Here, for instance, is a histogram of the lengths of the mean of $t=9$ nearest neighbors for $n=10^4$ points in $d=100$ dimensions. To standardize the comparison (which helps when varying $d$), all distances have been divided by $\sqrt{d}$, thereby bringing the points close to the unit sphere. The blue curve is the standardized $\chi^2(100)$ density. The vertical dashed red line is the (over)estimate of the typical distance between the average of the nine nearest neighbors and the center of the sphere. The histogram displays the simulated data, which are based on $10$ independent sets of points and $50$ randomly selected points out of each of those sets. That the histogram is situated close to, but a little less than, the vertical red line supports the preceding quantitative analysis. That both are substantially less than $1$ supports the statement in the question itself. For those who would like to experiment, here is the R code to produce similar situations. Take care when increasing any of the inputs, because the calculations can take a long time. 
n <- 1e4     # Number of points
m <- 50      # Subset size
k <- 9       # Number of nearest points
d <- 100     # Dimension
n.sim <- 10  # Number of trials
#set.seed(17)
sim.raw <- replicate(n.sim, {
  x <- matrix(rnorm((n+1)*d), d)
  sapply(sample.int(n+1, m), function(i) {
    distance.2 <- colSums((x - x[, i])^2)
    q <- x[, order(distance.2)[1:k + 1], drop=FALSE]
    q.mean <- rowMeans(q)
    sum(q.mean^2)
  })
})
sim <- sqrt(sim.raw / d)
h <- sqrt((qnorm(1/(n+1))^2 + (d-1)/k)/d) # Overestimate of the mean
hist(sim, xlim=range(c(sim, h, 0, 1 + 2*sqrt(d-1)/d)), freq=FALSE,
     xlab=paste("Length of standardized mean of", k, "nearest neighbors"),
     main=paste("n =", n, "points in d =", d, "dimensions"),
     sub=paste(n.sim, "trials using", m, "points per trial"), cex.sub=0.8)
abline(v=h, col="Red", lty=3, lwd=2)
abline(v=mean(sim), col="Gray", lwd=2)
curve(2 * x * dchisq(x^2*d, df=d) * d, add=TRUE, col="Blue")
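As an independent sanity check of the key distributional claim, that $(\cos(\theta)+1)/2$ has mean $1/2$ and variance $1/(4d)$, here is a small stdlib-only Python simulation (separate from the R code above; the sample sizes are arbitrary illustrative choices):

```python
import math, random

random.seed(1)
d, n = 100, 4000  # dimension and number of simulated angle draws

def unit_gauss(d):
    # A uniformly random direction: a normalised Gaussian vector
    v = [random.gauss(0.0, 1.0) for _ in range(d)]
    s = math.sqrt(sum(x * x for x in v))
    return [x / s for x in v]

# (cos(theta) + 1)/2 for angles between independent random points
vals = []
for _ in range(n):
    u, v = unit_gauss(d), unit_gauss(d)
    cos_t = sum(a * b for a, b in zip(u, v))
    vals.append((cos_t + 1.0) / 2.0)

mean_val = sum(vals) / n                              # should be near 1/2
var_val = sum((x - mean_val) ** 2 for x in vals) / n  # should be near 1/(4d)
```

With $d=100$ the predicted variance is $1/400 = 0.0025$, i.e. a standard deviation of $0.05$ around the equator, matching the Beta$((d-1)/2,(d-1)/2)$ analysis.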
Explanation for this event on a high-dimensional dataset
First of all, a comment on the question: points in the sample $S$ won't all be at distance $\sqrt{d}$ from the center. The squared distance follows a $\chi^2$ distribution with $d$ degrees of freedom, and therefore has mean $d$ and variance $2d$. Distances will thus be distributed around $\sqrt{d}$, but there is not a concentration of points in the $\sqrt{d}$-radius shell.

As for the question itself, I don't see how you observe that $||c-\text{mean}(T)|| \ll \sqrt{d}$; in fact $||c-\text{mean}(T)||$ depends a lot on how $T$ has been selected. I'll comment on two extreme cases:

If $T$ is much smaller than $S$: $T$ (the $t$ points closest to $p$) are likely to be very close to $p$, and therefore the distribution of $\text{mean}(T)$ will be very close to the distribution of $p$: $||c-\text{mean}(T)||^2$ will approximately follow a $\chi^2$ distribution with $d$ degrees of freedom, and therefore have mean $d$ and variance $2d$, just like the distance from $c$ to $p$.

If $T$ is nearly as large as $S$: then $T$ is approximately a random sample from the $d$-dimensional spherical (unit variance) Gaussian distribution. When finding $\text{mean}(T)$, for each dimension we are averaging $t$ points from a normal distribution, and on the whole we get a $d$-dimensional spherical Gaussian distribution with variance $1/t$, so $t\,||c-\text{mean}(T)||^2$ will follow a $\chi^2$ distribution with $d$ degrees of freedom. The mean of $||c-\text{mean}(T)||^2$ will be $d/t$ and its variance $2d/t$. Then $||c-\text{mean}(T)||$ will be distributed around $\sqrt{d/t}$, and for larger $t$ it will be much smaller than $\sqrt{d}$.
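Both limiting cases are easy to verify numerically. Here is a stdlib-only Python sketch (with arbitrary illustrative values of $d$ and $n$, taking $T$ to be the entire sample for the second case):

```python
import math, random

random.seed(0)
d, n = 100, 2000  # illustrative dimension and sample size

# Sample n points from the d-dimensional standard Gaussian
pts = [[random.gauss(0.0, 1.0) for _ in range(d)] for _ in range(n)]

# Individual distances from the centre concentrate near sqrt(d) = 10
dists = [math.sqrt(sum(x * x for x in p)) for p in pts]
mean_dist = sum(dists) / n

# Taking T to be the whole sample, mean(T) is Gaussian with variance
# 1/n per coordinate, so its distance from c should be near
# sqrt(d/n) = sqrt(100/2000) ~ 0.22
centroid = [sum(p[j] for p in pts) / n for j in range(d)]
centroid_norm = math.sqrt(sum(c * c for c in centroid))
```

The same code with a small nearest-neighbour subset in place of the full sample would reproduce the first case, with the centroid's distance staying near $\sqrt{d}$ instead.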
Including ordinal independent variables in a linear mixed effects model (using the lme4 package in R)
(This answer applies to [generalized] linear models generally, not just mixed models.) This answer on SO discusses the interpretation of linear models with ordinal independent (predictor) variables. Here are two reasonable approaches, not clear which is best:

1. Treat the score as numeric. Advantages: simple, parsimonious (only takes one parameter). Disadvantages: assumes that the degree of change between each successive pair of scores is identical.

2. Convert the score to an ordered factor; in R, this automatically (by default) uses orthogonal polynomial contrasts. This will use the same number of parameters as treating the score as an unordered factor (and will give the same overall predictions, goodness-of-fit, etc.), but will give more interpretable parameters in terms of linear, quadratic, cubic ... terms. You may be able to reduce the number of terms (equivalent to using a lower-order orthogonal polynomial). Advantages: makes no assumptions about the size of differences. Disadvantages: less parsimonious.

Depending on what your goal is (prediction, hypothesis testing, etc.), you may be able to find some level of intermediate complexity (by regularizing/penalizing, or more crudely by reverting to lower-order orthogonal polynomials, or by fitting a generalized additive model using a spline function of the scores), but the two options above are the simplest.
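For intuition about what the ordered-factor route fits, the orthogonal polynomial contrasts R generates can be reconstructed by hand. Below is a plain-Python sketch (not from the original answer) that Gram-Schmidts the powers of the level index and normalises, matching what contr.poly() produces for the linear and quadratic columns:

```python
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def contr_poly(k):
    # Orthogonal polynomial contrasts for k ordered levels:
    # Gram-Schmidt on the columns 1, x, x^2, ... of the level index
    levels = list(range(1, k + 1))
    basis = [[float(x ** p) for x in levels] for p in range(k)]
    ortho = []
    for v in basis:
        w = list(v)
        for u in ortho:
            c = dot(w, u) / dot(u, u)
            w = [wi - c * ui for wi, ui in zip(w, u)]
        ortho.append(w)
    # Drop the constant column; scale each contrast to unit length
    contrasts = []
    for w in ortho[1:]:
        norm = math.sqrt(dot(w, w))
        contrasts.append([wi / norm for wi in w])
    return contrasts  # rows: linear, quadratic, cubic, ...

lin, quad, cub = contr_poly(4)
```

For four levels the linear contrast is $(-3,-1,1,3)/\sqrt{20}$ and the quadratic is $(1,-1,-1,1)/2$; dropping the cubic row is exactly the "lower-order orthogonal polynomial" simplification mentioned above.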
Is it possible to over-train a classifier?
Pankaj Daga's explanation is great; I'll take care of the illustration. Here is a typical curve when training a neural network: The reported F1-score for the test set should be the F1-score of the test set at the epoch where the F1-score of the validation set was the highest (this is called "test best" on the figure).
Is it possible to over-train a classifier?
Your comment regarding the epoch is true: if you use too few epochs you may underfit, and using too many epochs can result in overfitting. As you know, you can always increase the training accuracy arbitrarily by increasing model complexity and increasing the number of epoch steps. One way to try to alleviate this problem is early stopping. In pseudocode:

Split the data into training, validation and test sets.
At every epoch (or every N epochs):
    evaluate the network error on the validation dataset;
    if the validation error is lower than the previous best, save the network at that epoch.
The final model is the one with the best performance on the validation set.

This is very similar to the classical cross-validation techniques you use in machine learning approaches. Regarding convergence, you usually say the network has converged to some local minimum if your error metric and weights are relatively constant over several iterations.
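The pseudocode above maps onto a small generic loop. Here is a Python sketch (the train_step and val_score callables are hypothetical stand-ins for your actual training and validation code):

```python
import copy, math

def train_with_early_stopping(model, train_step, val_score,
                              max_epochs=100, check_every=1):
    """Keep the model state from the epoch with the best
    validation score; return that snapshot and its score."""
    best_score = -math.inf
    best_state = copy.deepcopy(model)
    for epoch in range(max_epochs):
        train_step(model)  # one epoch of training, mutates model
        if epoch % check_every == 0:
            score = val_score(model)
            if score > best_score:
                best_score = score
                best_state = copy.deepcopy(model)
    return best_state, best_score
```

The key design point is the snapshot: training runs to max_epochs, but what you report and deploy is the state saved at the validation optimum, not the final (possibly overfit) weights.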
Missing Data Mixed Effects Modelling for Repeated Measures
Imputation using within-subject means isn't a great idea because it will result in biased (too small) standard errors and possibly biased estimates. Assuming that the data are missing at random, a much better idea is to use multiple imputation. The mice package in R has the capability to impute continuous variables in a mixed effects framework with a single random effect (grouping variable) - just specify 2l.norm as the imputation method. For example, suppose our analysis model is

> require(mice)
> require(lme4)
> m0 <- lmer(teachpop ~ sex + texp + popular + (1|school), data=popmis)
> confint(m0)
                  2.5 %     97.5 %
.sig01       0.44905533 0.62574295
.sigma       0.54368549 0.59259188
(Intercept)  2.03118933 2.67864796
sex         -0.07108881 0.09183821
texp         0.03024598 0.06505065
popular      0.22257646 0.32572600

Due to missingness in the predictor popular this model may be biased. So we will use multiple imputation:

> ini <- mice(popmis, maxit=0)
> (pred <- ini$pred)
         pupil school popular sex texp const teachpop
pupil        0      0       0   0    0     0        0
school       0      0       0   0    0     0        0
popular      1      1       0   1    1     0        1
sex          0      0       0   0    0     0        0
texp         0      0       0   0    0     0        0
const        0      0       0   0    0     0        0
teachpop     0      0       0   0    0     0        0

This is the default predictor matrix for the imputation model. Only popular has missing values, and we are going to impute them using a mixed model where school is the grouping factor, and the other variables are fixed effects.
To do this, we use -2 to tell mice that school is the grouping variable, and 2 for the fixed effects:

> pred["popular",] <- c(0, -2, 0, 2, 2, 2, 0)
> (pred)

So now we have:

         pupil school popular sex texp const teachpop
pupil        0      0       0   0    0     0        0
school       0      0       0   0    0     0        0
popular      0     -2       0   2    2     2        0
sex          0      0       0   0    0     0        0
texp         0      0       0   0    0     0        0
const        0      0       0   0    0     0        0
teachpop     0      0       0   0    0     0        0

We have set up the predictor matrix, so we can now create 10 multiply imputed datasets using the 2l.norm method to impute values for popular:

> imp <- mice(popmis, meth = c("", "", "2l.norm", "", "", "", ""),
              pred = pred, maxit = 10, m = 10)

Now we run the mixed model on each of the imputed datasets:

> fit <- with(imp, lmer(teachpop ~ sex + texp + popular + (1|school)))

...and pool the results:

> summary(pool(fit))
                   est          se         t        df     Pr(>|t|)      lo 95      hi 95 nmis
(Intercept) 2.73951576 0.165053863 16.597708 1991.5874 0.000000e+00 2.41581941 3.06321211   NA
sex         0.08620420 0.031042794  2.776947  915.1865 5.599307e-03 0.02528087 0.14712753    0
texp        0.05682495 0.009713717  5.849970 1991.4452 5.733929e-09 0.03777484 0.07587506    0
popular     0.16696926 0.018760706  8.899945 1980.9159 0.000000e+00 0.13017647 0.20376205  848
Missing Data Mixed Effects Modelling for Repeated Measures
Imputation using within subject means isn't a great idea because it will result in biased (too small) standard errors and possibly biased estimates. Assuming that the data are missing at random, a muc
Missing Data Mixed Effects Modelling for Repeated Measures Imputation using within subject means isn't a great idea because it will result in biased (too small) standard errors and possibly biased estimates. Assuming that the data are missing at random, a much better idea is to use multiple imputation. The mice package in R has the capability to impute continuous variables in a mixed efects framework with a single random effect (grouping variable) - just specify 2l.norm as the grouping variable. For example, suppose our analysis model is > require(mice) > require(lme4) > m0 <- lmer(teachpop~sex+texp+popular + (1|school), data=popmis) > confint(m0) 2.5 % 97.5 % .sig01 0.44905533 0.62574295 .sigma 0.54368549 0.59259188 (Intercept) 2.03118933 2.67864796 sex -0.07108881 0.09183821 texp 0.03024598 0.06505065 popular 0.22257646 0.32572600 Due to missingness in the predictor popular this model may be biased. So we will use multiple imputation: > ini <- mice(popmis, maxit=0) > (pred <- ini$pred) pupil school popular sex texp const teachpop pupil 0 0 0 0 0 0 0 school 0 0 0 0 0 0 0 popular 1 1 0 1 1 0 1 sex 0 0 0 0 0 0 0 texp 0 0 0 0 0 0 0 const 0 0 0 0 0 0 0 teachpop 0 0 0 0 0 0 0 This is the default predictor matrix for the imputation model. Only popular has missing values, and we are going to impute them using a mixed model where school is the grouping factor, and the other variables are fixed effects. 
To do this, we use -2 to tell mice that school is the grouping variable, and 2 for the fixed effects: > pred["popular",] <- c(0, -2, 0, 2, 2, 2, 0) > (pred) So now we have: pupil school popular sex texp const teachpop pupil 0 0 0 0 0 0 0 school 0 0 0 0 0 0 0 popular 0 -2 0 2 2 2 0 sex 0 0 0 0 0 0 0 texp 0 0 0 0 0 0 0 const 0 0 0 0 0 0 0 teachpop 0 0 0 0 0 0 0 We have set up the predictor matrix, so we can now create 10 multiply imputed datasets using the 2l.norm method to impute values for popular > imp <- mice(popmis, meth = c("","","2l.norm","","","",""), pred = pred, maxit=10, m = 10) Now we run the mixed model on each of the imputed datasets: > fit <- with(imp, lmer(teachpop~sex+texp+popular + (1|school))) ...and pool the results: > summary(pool(fit)) est se t df Pr(>|t|) lo 95 hi 95 nmis (Intercept) 2.73951576 0.165053863 16.597708 1991.5874 0.000000e+00 2.41581941 3.06321211 NA sex 0.08620420 0.031042794 2.776947 915.1865 5.599307e-03 0.02528087 0.14712753 0 texp 0.05682495 0.009713717 5.849970 1991.4452 5.733929e-09 0.03777484 0.07587506 0 popular 0.16696926 0.018760706 8.899945 1980.9159 0.000000e+00 0.13017647 0.20376205 848
Missing Data Mixed Effects Modelling for Repeated Measures Imputation using within subject means isn't a great idea because it will result in biased (too small) standard errors and possibly biased estimates. Assuming that the data are missing at random, a muc
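The pool() step above combines the per-imputation fits with Rubin's rules. As a language-neutral sketch of what that pooling does (pure Python with made-up numbers, not mice's actual code or output from the model above): the pooled point estimate is the mean of the m per-imputation estimates, and the total variance adds a between-imputation penalty to the average within-imputation variance.

```python
import statistics

def rubin_pool(estimates, variances):
    """Pool m per-imputation results with Rubin's rules:
    point estimate = mean of the estimates; total variance =
    within-imputation variance + (1 + 1/m) * between-imputation variance."""
    m = len(estimates)
    qbar = statistics.mean(estimates)      # pooled point estimate
    w = statistics.mean(variances)         # within-imputation variance
    b = statistics.variance(estimates)     # between-imputation variance
    t = w + (1 + 1 / m) * b                # total variance
    return qbar, t

# Illustrative numbers only: three imputations of one coefficient
est, total_var = rubin_pool([0.16, 0.18, 0.17], [0.0003, 0.0004, 0.0003])
```

The between-imputation term is why multiple imputation gives honest (wider) standard errors than single imputation with subject means.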
46,119
Missing Data Mixed Effects Modelling for Repeated Measures
In his book Stef van Buuren describes the difficulties in multilevel modelling when a level-1 or level-2 predictor is missing. He advises using 2l.pmm from miceadds and setting the value to 3 in the predictor matrix as outlined here under '7.10.2 Random intercepts, missing level-1 predictor'. This adds the cluster means for the imputed variable to the model. According to this, a value of 3 in the predictor matrix of mice means 'imputation model with a fixed effect, random intercept and cluster means'. This seems to be an update on this tutorial where the accepted answer is described. As Stef van Buuren also mentions, alternatives would be ad-hoc solutions, e.g. listwise deletion (which in your case obviously does not make sense), or multilevel imputation with joint models, e.g. with the R package jomo. P.S.: To add to OP's question of long vs wide: Stef van Buuren also mentions that mixed models can handle missing data in the outcome easily (see here). The multilevel model is actually “made to solve” the problem of missing values in the outcome. (...) Missing outcome data are easily handled in modern likelihood-based methods (...). Mixed-effects models can be fit with maximum-likelihood methods, which take care of missing data in the dependent variable. This principle can be extended to address missing data in explanatory variables in (multilevel) software for structural equation modeling like Mplus (Muthén, Muthén, and Asparouhov 2016) and gllamm (Rabe-Hesketh, Skrondal, and Pickles 2002).
Missing Data Mixed Effects Modelling for Repeated Measures
In his book Stef van Buuren describes the difficulties in multilevel modelling when a level-1 or level-2 predictor is missing. He advises using 2l.pmm from miceadds and setting the value to 3 in the
Missing Data Mixed Effects Modelling for Repeated Measures In his book Stef van Buuren describes the difficulties in multilevel modelling when a level-1 or level-2 predictor is missing. He advises using 2l.pmm from miceadds and setting the value to 3 in the predictor matrix as outlined here under '7.10.2 Random intercepts, missing level-1 predictor'. This adds the cluster means for the imputed variable to the model. According to this, a value of 3 in the predictor matrix of mice means 'imputation model with a fixed effect, random intercept and cluster means'. This seems to be an update on this tutorial where the accepted answer is described. As Stef van Buuren also mentions, alternatives would be ad-hoc solutions, e.g. listwise deletion (which in your case obviously does not make sense), or multilevel imputation with joint models, e.g. with the R package jomo. P.S.: To add to OP's question of long vs wide: Stef van Buuren also mentions that mixed models can handle missing data in the outcome easily (see here). The multilevel model is actually “made to solve” the problem of missing values in the outcome. (...) Missing outcome data are easily handled in modern likelihood-based methods (...). Mixed-effects models can be fit with maximum-likelihood methods, which take care of missing data in the dependent variable. This principle can be extended to address missing data in explanatory variables in (multilevel) software for structural equation modeling like Mplus (Muthén, Muthén, and Asparouhov 2016) and gllamm (Rabe-Hesketh, Skrondal, and Pickles 2002).
Missing Data Mixed Effects Modelling for Repeated Measures In his book Stef van Buuren describes the difficulties in multilevel modelling when a level-1 or level-2 predictor is missing. He advises using 2l.pmm from miceadds and setting the value to 3 in the
46,120
How to sample value from Symmetric Geometric Distribution
I mostly agree with @Glen_b but I think the correct probability function is $$f(\theta) = \begin{cases} p_g &\theta = 0\\ \frac{1}{2}p_g(1-p_g)^{|\theta|-1} &\theta \neq 0\end{cases}$$ for integer $\theta$. This seems to be the only way to get the correct variance, and it is the same as the given formula up to a scalar. As Glen_b said, a draw from this distribution can be obtained by drawing from a geometric distribution and multiplying the result by a uniform random choice of $+1$ or $-1$. You can work this out by hand and/or verify it by simulation. I also agree that the intention is probably that the proposed value $\theta_{new}$ is meant to be $\theta + x$ where $x$ is a random draw from $f$. This is the only way I can see for the text in your link to make sense. Unfortunately, we cannot know for sure because we don't have access to the source!
How to sample value from Symmetric Geometric Distribution
I mostly agree with @Glen_b but I think the correct probability function is $$f(\theta) = \begin{cases} p_g &\theta = 0\\ \frac{1}{2}p_g(1-p_g)^{|\theta|-1} &\theta \neq 0\end{cases}$$ for integer $\t
How to sample value from Symmetric Geometric Distribution I mostly agree with @Glen_b but I think the correct probability function is $$f(\theta) = \begin{cases} p_g &\theta = 0\\ \frac{1}{2}p_g(1-p_g)^{|\theta|-1} &\theta \neq 0\end{cases}$$ for integer $\theta$. This seems to be the only way to get the correct variance, and it is the same as the given formula up to a scalar. As Glen_b said, a draw from this distribution can be obtained by drawing from a geometric distribution and multiplying the result by a uniform random choice of $+1$ or $-1$. You can work this out by hand and/or verify it by simulation. I also agree that the intention is probably that the proposed value $\theta_{new}$ is meant to be $\theta + x$ where $x$ is a random draw from $f$. This is the only way I can see for the text in your link to make sense. Unfortunately, we cannot know for sure because we don't have access to the source!
How to sample value from Symmetric Geometric Distribution I mostly agree with @Glen_b but I think the correct probability function is $$f(\theta) = \begin{cases} p_g &\theta = 0\\ \frac{1}{2}p_g(1-p_g)^{|\theta|-1} &\theta \neq 0\end{cases}$$ for integer $\t
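The geometric-draw-plus-random-sign recipe in the answer above is easy to check by simulation. Below is a minimal pure-Python sketch (the function names are mine, not from any package): it draws a geometric variate on {0, 1, 2, ...}, attaches a uniformly random sign, and empirically confirms that the resulting distribution is symmetric about 0 with mass close to $p_g$ at 0.

```python
import random

def draw_geometric(p, rng):
    """Geometric on {0, 1, 2, ...}: number of failures before the
    first success, so P(X = k) = p * (1 - p)**k."""
    k = 0
    while rng.random() >= p:
        k += 1
    return k

def draw_symmetric_geometric(p, rng):
    """Draw a geometric variate and attach a uniformly random sign."""
    x = draw_geometric(p, rng)
    return x if rng.random() < 0.5 else -x

rng = random.Random(2024)
p_g, n = 0.3, 100_000
draws = [draw_symmetric_geometric(p_g, rng) for _ in range(n)]

p_zero = draws.count(0) / n   # should be close to p_g (both +0 and -0 map to 0)
mean = sum(draws) / n         # should be close to 0 by symmetry
```

With 100,000 draws the empirical frequency at zero and the sample mean settle within Monte Carlo noise of $p_g$ and 0 respectively.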
46,121
How to sample value from Symmetric Geometric Distribution
The discrete Laplace distribution is very similar to the one you describe (check Inusah and Kozubowski, 2006, and Kotz, Kozubowski and Podgorski, 2012). It has probability mass function: $$ f(x) = \frac{1-p}{1+p} p^{|x-\mu|} $$ and cumulative distribution function $$ F(x) = \left\{\begin{array}{ll} \frac{p^{|x-\mu|}}{1+p} & x < \mu \\ 1 - \frac{p^{|x-\mu|+1}}{1+p} & x \ge \mu \end{array}\right. $$ The name discrete Laplace comes from the fact that if $U \sim \mathrm{Geometric}(1-p)$ and $V \sim \mathrm{Geometric}(1-p)$, then $U-V \sim \mathrm{DiscreteLaplace}(p)$, where the geometric distribution is related to the discrete Laplace distribution in a similar way as the exponential distribution is related to the Laplace distribution. Sampling from it is straightforward: you draw $U$ and $V$ from a geometric distribution parametrized by $q = 1-p$, and then take $U-V$. In R it is implemented in the DiscreteLaplace, disclap and extraDistr packages. Below you can see your distribution (in black) and the discrete Laplace distribution (in red) parametrized by different values of $p$ (for discrete Laplace by $1-p$). As you can see, they differ but the idea behind them is very similar. Kotz, S., Kozubowski, T., & Podgorski, K. (2012). The Laplace distribution and generalizations: a revisit with applications to communications, economics, engineering, and finance. Springer Science & Business Media. Inusah, S., & Kozubowski, T.J. (2006). A discrete analogue of the Laplace distribution. Journal of statistical planning and inference, 136(3), 1090-1102.
How to sample value from Symmetric Geometric Distribution
The discrete Laplace distribution is very similar to the one you describe (check Inusah and Kozubowski, 2006, and Kotz, Kozubowski and Podgorski, 2012). It has probability mass
How to sample value from Symmetric Geometric Distribution The discrete Laplace distribution is very similar to the one you describe (check Inusah and Kozubowski, 2006, and Kotz, Kozubowski and Podgorski, 2012). It has probability mass function: $$ f(x) = \frac{1-p}{1+p} p^{|x-\mu|} $$ and cumulative distribution function $$ F(x) = \left\{\begin{array}{ll} \frac{p^{|x-\mu|}}{1+p} & x < \mu \\ 1 - \frac{p^{|x-\mu|+1}}{1+p} & x \ge \mu \end{array}\right. $$ The name discrete Laplace comes from the fact that if $U \sim \mathrm{Geometric}(1-p)$ and $V \sim \mathrm{Geometric}(1-p)$, then $U-V \sim \mathrm{DiscreteLaplace}(p)$, where the geometric distribution is related to the discrete Laplace distribution in a similar way as the exponential distribution is related to the Laplace distribution. Sampling from it is straightforward: you draw $U$ and $V$ from a geometric distribution parametrized by $q = 1-p$, and then take $U-V$. In R it is implemented in the DiscreteLaplace, disclap and extraDistr packages. Below you can see your distribution (in black) and the discrete Laplace distribution (in red) parametrized by different values of $p$ (for discrete Laplace by $1-p$). As you can see, they differ but the idea behind them is very similar. Kotz, S., Kozubowski, T., & Podgorski, K. (2012). The Laplace distribution and generalizations: a revisit with applications to communications, economics, engineering, and finance. Springer Science & Business Media. Inusah, S., & Kozubowski, T.J. (2006). A discrete analogue of the Laplace distribution. Journal of statistical planning and inference, 136(3), 1090-1102.
How to sample value from Symmetric Geometric Distribution The discrete Laplace distribution is very similar to the one you describe (check Inusah and Kozubowski, 2006, and Kotz, Kozubowski and Podgorski, 2012). It has probability mass
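The $U - V$ construction translates directly into code. Here is a pure-Python sketch (the R packages named above would do this for you; the helper names are mine): draw two independent geometric variates on {0, 1, 2, ...} with success probability $q = 1-p$ and take their difference, whose theoretical mass at the centre ($\mu = 0$) is $f(0) = (1-p)/(1+p)$.

```python
import random

def draw_geometric(q, rng):
    """Geometric on {0, 1, 2, ...} with success probability q:
    P(X = k) = q * (1 - q)**k."""
    k = 0
    while rng.random() >= q:
        k += 1
    return k

def draw_discrete_laplace(p, rng):
    """U - V with U, V independent Geometric(1 - p) variates."""
    q = 1.0 - p
    return draw_geometric(q, rng) - draw_geometric(q, rng)

rng = random.Random(7)
p, n = 0.5, 100_000
draws = [draw_discrete_laplace(p, rng) for _ in range(n)]

f_zero = (1 - p) / (1 + p)    # theoretical pmf at x = 0 (mu = 0)
p_zero = draws.count(0) / n
mean = sum(draws) / n         # should be close to 0 by symmetry
```

The empirical frequency at zero should land within Monte Carlo noise of $f(0) = 1/3$ for $p = 0.5$.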
46,122
How to sample value from Symmetric Geometric Distribution
The linked document is unclear; it doesn't even indicate the random variable (which I will call $X$). I assume it means $p_g$ & $\theta$ are parameters. It doesn't define the values taken by $X$ - which I believe should appear in the exponent as $|x-\theta|$ not as $|\theta|$ - nor the values taken by $\theta$ (which I presume to be integer) and it refers to the distribution as discrete but calls it a density. I believe a correct implementation of a "symmetric geometric" is as follows: $$p_X(x;\theta,p_g)= \frac{p_g (1 - p_g)^{\mid x-\theta\mid}}{2- p_g},\: x\in \mathbb{Z};\, \theta\in \mathbb{Z}, 0<p_g<1$$ (note the change in the denominator also) This distribution has the property that as you move out from the center, the ratio of probability to the next probability further into the tail remains constant (in geometric ratio). This is what makes it "geometric". Here's an example, with $p_g=0.3$ and $\theta=10$: As you see it's symmetric, geometric and centered at $\theta$. If the distribution has (sub)exponential tails, that should work fine as a proposal. What they attempted to define (but failed to) is slightly different: $$p_X(x;\theta,p_g)= \frac{p_g (1 - p_g)^{|x-\theta|}}{2(1- p_g)},\: x\in \mathbb{Z};\, \theta\in \mathbb{Z}, 0<p_g<1$$ This is not in geometric ratio, because the central spike is twice the height it should be to be in the same ratio to the values either side as those values are to the ones further out again. While I don't think it merits the name symmetric geometric, this distribution can be generated by drawing a geometric (the version indexed from 0), attaching a random sign and shifting by $\theta$ (i.e. add $\theta$). (The one I defined in the beginning is slightly more complicated to generate, but one way to do it is as above but if you generate a $-0$ you throw it out and generate again.) 
A different one suitable for heavier tails would be a discrete equivalent of a table-mountain distribution, such as $$p_X(x;\theta,p_g)= \frac{\min(1,\mid\! x-\theta\!\mid^{-s})}{2\zeta(s)+1},\: x\in \mathbb{Z};\, \theta\in \mathbb{Z},s>1$$ (specifically with $s=2$ for the usual table mountain proposal). This is effectively a symmetric version of a zeta(s) distribution with a "flat" bit at $\theta$ (in that it has the same probability as the values either side of it). You could replace the value in the numerator at $\theta$ (i.e. $1$) with an arbitrary positive value (if you also fix the denominator), or expand the "flat" part (ditto) or use a different $s$ if needed.
How to sample value from Symmetric Geometric Distribution
The linked document is unclear; it doesn't even indicate the random variable (which I will call $X$). I assume it means $p_g$ & $\theta$ are parameters. It doesn't define the values taken by $X$ - w
How to sample value from Symmetric Geometric Distribution The linked document is unclear; it doesn't even indicate the random variable (which I will call $X$). I assume it means $p_g$ & $\theta$ are parameters. It doesn't define the values taken by $X$ - which I believe should appear in the exponent as $|x-\theta|$ not as $|\theta|$ - nor the values taken by $\theta$ (which I presume to be integer) and it refers to the distribution as discrete but calls it a density. I believe a correct implementation of a "symmetric geometric" is as follows: $$p_X(x;\theta,p_g)= \frac{p_g (1 - p_g)^{\mid x-\theta\mid}}{2- p_g},\: x\in \mathbb{Z};\, \theta\in \mathbb{Z}, 0<p_g<1$$ (note the change in the denominator also) This distribution has the property that as you move out from the center, the ratio of probability to the next probability further into the tail remains constant (in geometric ratio). This is what makes it "geometric". Here's an example, with $p_g=0.3$ and $\theta=10$: As you see it's symmetric, geometric and centered at $\theta$. If the distribution has (sub)exponential tails, that should work fine as a proposal. What they attempted to define (but failed to) is slightly different: $$p_X(x;\theta,p_g)= \frac{p_g (1 - p_g)^{|x-\theta|}}{2(1- p_g)},\: x\in \mathbb{Z};\, \theta\in \mathbb{Z}, 0<p_g<1$$ This is not in geometric ratio, because the central spike is twice the height it should be to be in the same ratio to the values either side as those values are to the ones further out again. While I don't think it merits the name symmetric geometric, this distribution can be generated by drawing a geometric (the version indexed from 0), attaching a random sign and shifting by $\theta$ (i.e. add $\theta$). (The one I defined in the beginning is slightly more complicated to generate, but one way to do it is as above but if you generate a $-0$ you throw it out and generate again.) 
A different one suitable for heavier tails would be a discrete equivalent of a table-mountain distribution, such as $$p_X(x;\theta,p_g)= \frac{\min(1,\mid\! x-\theta\!\mid^{-s})}{2\zeta(s)+1},\: x\in \mathbb{Z};\, \theta\in \mathbb{Z},s>1$$ (specifically with $s=2$ for the usual table mountain proposal). This is effectively a symmetric version of a zeta(s) distribution with a "flat" bit at $\theta$ (in that it has the same probability as the values either side of it). You could replace the value in the numerator at $\theta$ (i.e. $1$) with an arbitrary positive value (if you also fix the denominator), or expand the "flat" part (ditto) or use a different $s$ if needed.
How to sample value from Symmetric Geometric Distribution The linked document is unclear; it doesn't even indicate the random variable (which I will call $X$). I assume it means $p_g$ & $\theta$ are parameters. It doesn't define the values taken by $X$ - w
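The "throw out $-0$ and generate again" recipe at the end of the answer can be sketched in a few lines of pure Python (the function name is mine). Rejecting the sign-times-zero outcome renormalises the sign-flipped geometric so the centre $\theta$ gets mass $p_g/(2-p_g)$, matching the first pmf in the answer.

```python
import random

def draw_sym_geometric(p_g, theta, rng):
    """Sample p(x) = p_g * (1 - p_g)**abs(x - theta) / (2 - p_g):
    draw a geometric on {0, 1, ...}, attach a random sign, and
    redraw whenever the result would be a "-0"."""
    while True:
        k = 0
        while rng.random() >= p_g:
            k += 1
        sign = 1 if rng.random() < 0.5 else -1
        if not (k == 0 and sign == -1):   # throw out "-0" and try again
            return theta + sign * k

rng = random.Random(3)
p_g, theta, n = 0.3, 10, 100_000
draws = [draw_sym_geometric(p_g, theta, rng) for _ in range(n)]

f_centre = p_g / (2 - p_g)            # theoretical mass at x = theta
p_centre = draws.count(theta) / n
mean = sum(draws) / n                 # close to theta by symmetry
```

With $p_g = 0.3$ the centre mass is $0.3/1.7 \approx 0.176$, and the sampler's empirical frequency at $\theta$ should agree within Monte Carlo noise.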
46,123
Generalized Additive Model interpretation with ordered categorical family in R
In these models, the linear predictor is a latent variable, with estimated thresholds $t_i$ that mark the transitions between levels of the ordered categorical response. The plots you show in the question are the smooth contributions of the four variables to the linear predictor, thresholds along which demarcate the categories. The figure below illustrates this for a linear predictor comprised of a smooth function of a single continuous variable for a four category response. The "effect" of body size on the linear predictor is smooth as shown by the solid black line and the grey confidence interval. By definition in the ocat family, the first threshold, $t_1$ is always at -1, which in the figure is the boundary between least concern and vulnerable. Two additional thresholds are estimated for the boundaries between the further categories. The summary() method will print out the thresholds (-1 plus the other estimated ones). For the example you quoted this is: > summary(b) Family: Ordered Categorical(-1,0.07,5.15) Link function: identity Formula: y ~ s(x0) + s(x1) + s(x2) + s(x3) Parametric coefficients: Estimate Std. Error z value Pr(>|z|) (Intercept) 0.1221 0.1319 0.926 0.354 Approximate significance of smooth terms: edf Ref.df Chi.sq p-value s(x0) 3.317 4.116 21.623 0.000263 *** s(x1) 3.115 3.871 188.368 < 2e-16 *** s(x2) 7.814 8.616 402.300 < 2e-16 *** s(x3) 1.593 1.970 0.936 0.640434 --- Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1 Deviance explained = 57.7% -REML = 283.82 Scale est. 
= 1 n = 400 or these can be extracted via > b$family$getTheta(TRUE) ## the estimated cut points [1] -1.00000000 0.07295739 5.14663505 Looking at the lower-left of the four smooths from the example, we would interpret this as showing that as $x_2$ increases from low to medium values we would, conditional upon the values of the other covariates, tend to see an increase in the probability that an observation is from one of the categories above the baseline one. But this effect is diminished at higher values of $x_2$. For $x_1$, we see a roughly linear effect of increased probability of belonging to higher order categories as the value of $x_1$ increases, with the effect being more rapid for $x_1 \geq \sim 0.5$. I have a more complete worked example in some course materials that I prepared with David Miller and Eric Pedersen, which you can find here. You can find the course website/slides/data on github here with the raw files here. The figure above was prepared by Eric for those workshop materials.
Generalized Additive Model interpretation with ordered categorical family in R
In these models, the linear predictor is a latent variable, with estimated thresholds $t_i$ that mark the transitions between levels of the ordered categorical response. The plots you show in the ques
Generalized Additive Model interpretation with ordered categorical family in R In these models, the linear predictor is a latent variable, with estimated thresholds $t_i$ that mark the transitions between levels of the ordered categorical response. The plots you show in the question are the smooth contributions of the four variables to the linear predictor, thresholds along which demarcate the categories. The figure below illustrates this for a linear predictor comprised of a smooth function of a single continuous variable for a four category response. The "effect" of body size on the linear predictor is smooth as shown by the solid black line and the grey confidence interval. By definition in the ocat family, the first threshold, $t_1$ is always at -1, which in the figure is the boundary between least concern and vulnerable. Two additional thresholds are estimated for the boundaries between the further categories. The summary() method will print out the thresholds (-1 plus the other estimated ones). For the example you quoted this is: > summary(b) Family: Ordered Categorical(-1,0.07,5.15) Link function: identity Formula: y ~ s(x0) + s(x1) + s(x2) + s(x3) Parametric coefficients: Estimate Std. Error z value Pr(>|z|) (Intercept) 0.1221 0.1319 0.926 0.354 Approximate significance of smooth terms: edf Ref.df Chi.sq p-value s(x0) 3.317 4.116 21.623 0.000263 *** s(x1) 3.115 3.871 188.368 < 2e-16 *** s(x2) 7.814 8.616 402.300 < 2e-16 *** s(x3) 1.593 1.970 0.936 0.640434 --- Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1 Deviance explained = 57.7% -REML = 283.82 Scale est. 
= 1 n = 400 or these can be extracted via > b$family$getTheta(TRUE) ## the estimated cut points [1] -1.00000000 0.07295739 5.14663505 Looking at the lower-left of the four smooths from the example, we would interpret this as showing that as $x_2$ increases from low to medium values we would, conditional upon the values of the other covariates, tend to see an increase in the probability that an observation is from one of the categories above the baseline one. But this effect is diminished at higher values of $x_2$. For $x_1$, we see a roughly linear effect of increased probability of belonging to higher order categories as the value of $x_1$ increases, with the effect being more rapid for $x_1 \geq \sim 0.5$. I have a more complete worked example in some course materials that I prepared with David Miller and Eric Pedersen, which you can find here. You can find the course website/slides/data on github here with the raw files here. The figure above was prepared by Eric for those workshop materials.
Generalized Additive Model interpretation with ordered categorical family in R In these models, the linear predictor is a latent variable, with estimated thresholds $t_i$ that mark the transitions between levels of the ordered categorical response. The plots you show in the ques
46,124
Seasonal data deemed stationary by ADF and KPSS tests
Both the augmented Dickey-Fuller (ADF) test and the Kwiatkowski, Phillips, Schmidt and Shin (KPSS) test are tailored for detecting nonstationarity in the form of a unit root in the process. (The test equations explicitly allow for a unit root; see the reference below.) However, they are not tailored for detecting other forms of nonstationarity. Therefore, it is not surprising that they do not detect nonstationarity of the seasonal kind. The result of the ADF test ($p$-value below 0.05) suggests that the null hypothesis of presence of a unit root can be rejected at the 95% confidence level. The result of the KPSS test ($p$-value above 0.05) suggests that the null hypothesis of absence of a unit root cannot be rejected at the 95% confidence level. (The bullet points are there just to confirm what you implied.) For an accessible and intuitive yet technically precise treatment of the ADF and the KPSS tests I suggest Eric Zivot's "Modelling Financial Time Series with S-PLUS" (2nd ed., 2006) Chapter 4 "Unit Root Tests" (especially sections 4.3 and 4.4).
Seasonal data deemed stationary by ADF and KPSS tests
Both the augmented Dickey-Fuller (ADF) test and the Kwiatkowski, Phillips, Schmidt and Shin (KPSS) test are tailored for detecting nonstationarity in the form of a unit root in the process. (The test
Seasonal data deemed stationary by ADF and KPSS tests Both the augmented Dickey-Fuller (ADF) test and the Kwiatkowski, Phillips, Schmidt and Shin (KPSS) test are tailored for detecting nonstationarity in the form of a unit root in the process. (The test equations explicitly allow for a unit root; see the reference below.) However, they are not tailored for detecting other forms of nonstationarity. Therefore, it is not surprising that they do not detect nonstationarity of the seasonal kind. The result of the ADF test ($p$-value below 0.05) suggests that the null hypothesis of presence of a unit root can be rejected at the 95% confidence level. The result of the KPSS test ($p$-value above 0.05) suggests that the null hypothesis of absence of a unit root cannot be rejected at the 95% confidence level. (The bullet points are there just to confirm what you implied.) For an accessible and intuitive yet technically precise treatment of the ADF and the KPSS tests I suggest Eric Zivot's "Modelling Financial Time Series with S-PLUS" (2nd ed., 2006) Chapter 4 "Unit Root Tests" (especially sections 4.3 and 4.4).
Seasonal data deemed stationary by ADF and KPSS tests Both the augmented Dickey-Fuller (ADF) test and the Kwiatkowski, Phillips, Schmidt and Shin (KPSS) test are tailored for detecting nonstationarity in the form of a unit root in the process. (The test
46,125
Seasonal data deemed stationary by ADF and KPSS tests
You can use HEGY test for seasonality or CH test for seasonality to check for seasonal unit roots. Better to use HEGY test. ADF and KPSS test for non seasonal unit roots. Since your seasonality is strong, take the seasonal difference and proceed to test for non seasonal unit roots. Probably seasonal differencing will remove all forms of non-stationarity and give you a stationary, seasonally differenced series. When fitting a VAR, always run ADF tests after seasonally adjusting your data.
Seasonal data deemed stationary by ADF and KPSS tests
You can use HEGY test for seasonality or CH test for seasonality to check for seasonal unit roots. Better to use HEGY test. ADF and KPSS test for non seasonal unit roots. Since your seasonality is str
Seasonal data deemed stationary by ADF and KPSS tests You can use HEGY test for seasonality or CH test for seasonality to check for seasonal unit roots. Better to use HEGY test. ADF and KPSS test for non seasonal unit roots. Since your seasonality is strong, take the seasonal difference and proceed to test for non seasonal unit roots. Probably seasonal differencing will remove all forms of non-stationarity and give you a stationary, seasonally differenced series. When fitting a VAR, always run ADF tests after seasonally adjusting your data.
Seasonal data deemed stationary by ADF and KPSS tests You can use HEGY test for seasonality or CH test for seasonality to check for seasonal unit roots. Better to use HEGY test. ADF and KPSS test for non seasonal unit roots. Since your seasonality is str
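To see why seasonal differencing is the recommended fix, here is a tiny pure-Python illustration with made-up monthly data: a deterministic period-12 pattern plus a linear trend is wiped out by the lag-12 difference, leaving only a constant.

```python
# Deterministic seasonal pattern (period 12) plus a linear trend.
period = 12
pattern = [5, 3, 1, 0, -2, -4, -4, -2, 0, 1, 3, 5]
y = [0.1 * t + pattern[t % period] for t in range(10 * period)]

# Seasonal (lag-12) difference: y_t - y_{t-12}. The seasonal pattern
# cancels exactly; only the trend increment 12 * 0.1 = 1.2 remains.
sdiff = [y[t] - y[t - period] for t in range(period, len(y))]
```

In real data the cancellation is only approximate, which is why one still runs ADF/KPSS (or HEGY) on the seasonally differenced series.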
46,126
Seasonal data deemed stationary by ADF and KPSS tests
This is because the DF/ADF test only tests for the first difference (aka the trend). It could iteratively cover larger differences (aka the seasonalities) as well, but it is defined as it is. This video https://www.youtube.com/watch?v=1opjnegd_hA explains the test equations, where you will see the value of delta.
Seasonal data deemed stationary by ADF and KPSS tests
This is because the DF/ADF test only tests for the first difference (aka the trend). It could iteratively cover larger differences (aka the seasonalities) as well, but it is defined as it is. This video
Seasonal data deemed stationary by ADF and KPSS tests This is because the DF/ADF test only tests for the first difference (aka the trend). It could iteratively cover larger differences (aka the seasonalities) as well, but it is defined as it is. This video https://www.youtube.com/watch?v=1opjnegd_hA explains the test equations, where you will see the value of delta.
Seasonal data deemed stationary by ADF and KPSS tests This is because the DF/ADF test only tests for the first difference (aka the trend). It could iteratively cover larger differences (aka the seasonalities) as well, but it is defined as it is. This video
46,127
What's the difference between univariate and multivariate cox regression?
I think that many people who use the words "multivariate regression" with Cox models really mean to say "multiple regression." (I will confess to having done that myself; it's common in the literature.) "Multiple regression" means having more than one predictor in a regression model, while "multivariate regression" is a term perhaps better reserved for situations where there is more than one outcome variable being considered together. In a Cox regression you are typically modeling just a single outcome variable, survival of some sort. If you are preparing results for publication in a medical journal, the editors and reviewers will typically expect to see a table of single-variable relations of predictor variables to outcome (your "univariate" regressions). These single-variable relations, however, are seldom very informative due to relations among the values of the predictors and potential interactions among the predictors with respect to outcome. These issues can be handled by Cox multiple regression, which gives you the best chance of evaluating each of the predictors with all the others taken into account, and which allows directly for testing of interactions. You have to be careful not to evaluate too many predictors together in a model, however. A useful rule of thumb is that you should limit your analysis to no more than 1 predictor per 10-20 events (recurrences or deaths in oncology) in a standard Cox multiple-regression model. Note that there can be a true multivariate Cox regression that evaluates multiple types of outcome together (e.g., both recurrence and death times in cancer studies), or that treats multiple events on the same individual with multivariate techniques, as in standard multivariate linear regression. This paper is one often-cited reference, in case that is what you actually mean. But in my experience, I think most people in the clinical literature say "multivariate Cox regression" when they really mean "Cox multiple regression." 
It would be wise to get some more direct advice from a local statistician, as there are many issues that need to be considered in building a reliable survival model. Working with an experienced practitioner can also be an efficient way to learn for yourself.
What's the difference between univariate and multivariate cox regression?
I think that many people who use the words "multivariate regression" with Cox models really mean to say "multiple regression." (I will confess to having done that myself; it's common in the literature
What's the difference between univariate and multivariate cox regression? I think that many people who use the words "multivariate regression" with Cox models really mean to say "multiple regression." (I will confess to having done that myself; it's common in the literature.) "Multiple regression" means having more than one predictor in a regression model, while "multivariate regression" is a term perhaps better reserved for situations where there is more than one outcome variable being considered together. In a Cox regression you are typically modeling just a single outcome variable, survival of some sort. If you are preparing results for publication in a medical journal, the editors and reviewers will typically expect to see a table of single-variable relations of predictor variables to outcome (your "univariate" regressions). These single-variable relations, however, are seldom very informative due to relations among the values of the predictors and potential interactions among the predictors with respect to outcome. These issues can be handled by Cox multiple regression, which gives you the best chance of evaluating each of the predictors with all the others taken into account, and which allows directly for testing of interactions. You have to be careful not to evaluate too many predictors together in a model, however. A useful rule of thumb is that you should limit your analysis to no more than 1 predictor per 10-20 events (recurrences or deaths in oncology) in a standard Cox multiple-regression model. Note that there can be a true multivariate Cox regression that evaluates multiple types of outcome together (e.g., both recurrence and death times in cancer studies), or that treats multiple events on the same individual with multivariate techniques, as in standard multivariate linear regression. This paper is one often-cited reference, in case that is what you actually mean. 
But in my experience, I think most people in the clinical literature say "multivariate Cox regression" when they really mean "Cox multiple regression." It would be wise to get some more direct advice from a local statistician, as there are many issues that need to be considered in building a reliable survival model. Working with an experienced practitioner can also be an efficient way to learn for yourself.
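The single-variable vs. multiple-regression distinction can be made concrete with a small simulation. This sketch uses ordinary least squares rather than a Cox model (purely for illustration, with made-up effect sizes): a predictor with no direct effect looks strongly related to the outcome on its own, but its coefficient collapses once the true driver is included in the model.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000

# x1 truly drives the outcome; x2 is correlated with x1 but has no direct effect
x1 = rng.normal(size=n)
x2 = 0.8 * x1 + 0.6 * rng.normal(size=n)
y = 2.0 * x1 + rng.normal(size=n)

def slopes(X, y):
    # least-squares coefficients for y ~ X (data are centered, no intercept)
    return np.linalg.lstsq(X, y, rcond=None)[0]

# single-variable ("univariate") fit: x2 alone looks strongly related to y
b_uni = slopes(x2[:, None], y)[0]

# multiple regression: with x1 included, x2's coefficient collapses toward zero
b_mult = slopes(np.column_stack([x1, x2]), y)
```

The same phenomenon is why a table of single-variable relations can be misleading when predictors are correlated.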
46,128
What's the difference between univariate and multivariate cox regression?
You should opt to do multivariable Cox regression analysis (not multivariate). As rightly pointed out by @EdM, multivariate means having more than one outcome variable, whereas in survival analysis you have only one outcome variable, i.e. the time-to-event of interest. Since in oncology the group of patients under study is, in most cases, heterogeneous, my advice would be to conduct a multivariable analysis.
46,129
suitable non-linear equation to capture a 'J-shaped' relationship between x and y
You just need to choose some sensible options in nls. Here I used constrained nls, forcing the coefficients to have opposite signs. I ran this after your code above: mfit <- nls(scatter ~ a1*exp(-b1*age) + a2*exp(b2*age), start=list(a1=.01, a2=.02, b1=.04, b2=.04), lower=list(0,0,0,0), algorithm="port", trace=TRUE) lines(age, fitted(mfit), col="yellow2", lwd=2, lty=3) The fitted curve is the light dashed line right up the middle of the purple curve.
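For readers working in Python, roughly the same constrained fit can be sketched with scipy's curve_fit, whose bounds argument plays the role of nls's lower= with algorithm="port". The data here are synthetic (the original scatter isn't reproduced), and the explicit minus sign on b1 is what forces the opposite-sign exponents:

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(1)
age = np.linspace(0.0, 100.0, 200)

def model(age, a1, b1, a2, b2):
    # falling early-life term plus rising late-life term, all coefficients >= 0
    return a1 * np.exp(-b1 * age) + a2 * np.exp(b2 * age)

# synthetic "mortality" scatter with a J shape
scatter = model(age, 0.01, 0.04, 0.0002, 0.07) + rng.normal(scale=0.005, size=age.size)

# bounds=(0, inf) constrains every parameter to be non-negative
popt, _ = curve_fit(model, age, scatter,
                    p0=[0.01, 0.04, 0.001, 0.04],
                    bounds=(0.0, np.inf))
```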
46,130
suitable non-linear equation to capture a 'J-shaped' relationship between x and y
FIRST APPROACH: Hurst et al. use the following form for a similar problem (as far as I can see): mortality = a + b * age * exp(c * age) where a, b, and c are parameters. The call to nls would be along the lines of dat <- data.frame(age=age, mortality=scatter) fit <- nls( mortality ~ a + b * age * exp(c*age), data=dat, start=list(a=0.05, b=-0.001, c=-0.001) ) plot(mortality~age, data=dat) lines(predict(fit)) where dat is the data.frame with your data, including the columns mortality and age. The fit does not look very nice, though. SECOND APPROACH: To exploit the known functional relation, you can use constrained optimisation to get a reasonable fit: ll <- function(par, dat) -sum(dnorm(dat[,"mortality"], mean=par[1]*exp(par[2]*dat[,"age"]) + par[3]*exp(par[4]*dat[,"age"]), sd=par[5], log=TRUE) ) fit_mle <- constrOptim( theta=c(1,-0.05,1, 0.05, 1), f=ll, grad=NULL, ui=diag(c(1,-1,1,1,1)), ci=rep(0,5), dat=dat ) p <- fit_mle[["par"]] y <- p[1]*exp(p[2]*age)+p[3]*exp(p[4]*age) plot(mortality~age, data=dat) lines(y) The important constraints are that the second parameter needs to be negative and the fourth parameter positive.
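A rough Python analogue of the second (constrained maximum-likelihood) approach, using scipy.optimize.minimize with box bounds in place of constrOptim's ui/ci. Here the decaying exponent is written as -b1 with b1 bounded below by zero, so every parameter gets a simple non-negativity bound; the data are synthetic:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(2)
age = np.linspace(0.0, 100.0, 200)
mortality = (0.01 * np.exp(-0.04 * age)
             + 0.0002 * np.exp(0.07 * age)
             + rng.normal(scale=0.005, size=age.size))

def nll(par):
    # Gaussian negative log-likelihood of the two-exponential mean model
    a1, b1, a2, b2, sd = par
    mean = a1 * np.exp(-b1 * age) + a2 * np.exp(b2 * age)
    return 0.5 * np.sum(np.log(2.0 * np.pi * sd**2)
                        + (mortality - mean) ** 2 / sd**2)

x0 = [0.01, 0.04, 0.001, 0.04, 0.02]
res = minimize(nll, x0,
               bounds=[(0.0, None)] * 4 + [(1e-6, None)],
               method="L-BFGS-B")
```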
46,131
Raw data outperforms Z-score transformed data in SVM classification
Keep in mind why people typically scale features prior to estimating an SVM. The notion is that the data are on different scales, and this happenstance of how things were measured might not be desirable -- for example, measuring some length quantity in meters versus kilometers. Obviously one will have a much larger range even though both represent the same physical quantity. However, there's no reason that the new scaling must be better. While it's true that the rescaled features will all vary in comparable units, it's also possible that the original scaling happened to encode the data such that some important features had more prominence in the model. Consider the example of two different versions of the Gaussian RBF kernel: $K_1(x,x^\prime)=\exp(-\gamma||x-x^\prime||^2_2).$ This is an isotropic kernel, meaning that the same scaling ($\gamma$) is applied in all directions. A more general kernel function might have the form $K_2(x,x^\prime)=\exp\big(-(x-x^\prime)^\top\Gamma(x-x^\prime)\big);$ it is anisotropic, as $\Gamma$ is a diagonal PSD matrix with each element applying a different scaling to each direction. The advantage of this kernel function is that it will vary more strongly in some directions than others. Coming back to your question, it's possible to imagine that your data have, for whatever reason, some features that are more important than others, and that this coincides with the scale on which they are measured. Placing them on a new scale where they all appear comparable and are all treated as equally important means that unimportant or noise features cloud the signal. As an aside, don't use accuracy as a metric for comparing models: Why is accuracy not the best measure for assessing classification models? "The Case Against Accuracy Estimation for Comparing Induction Algorithms", Foster Provost, Tom Fawcett, Ron Kohavi
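The isotropic/anisotropic distinction is easy to see numerically. A minimal numpy sketch of the two kernel functions (notation as in the answer, with a diagonal $\Gamma$; the example points and scales are arbitrary):

```python
import numpy as np

def rbf_isotropic(x, xp, gamma=1.0):
    # K1: the same length-scale gamma in every direction
    d = x - xp
    return np.exp(-gamma * (d @ d))

def rbf_anisotropic(x, xp, scales):
    # K2: a diagonal PSD matrix Gamma gives each direction its own scaling
    d = x - xp
    return np.exp(-(d @ np.diag(scales) @ d))

x  = np.array([1.0, 2.0])
xp = np.array([1.5, 2.0])   # displaced only along the first coordinate

# a large scale on the first coordinate makes the kernel decay quickly in that
# direction; a small scale makes it nearly insensitive to the same displacement
k_sharp = rbf_anisotropic(x, xp, scales=[10.0, 0.1])
k_flat  = rbf_anisotropic(x, xp, scales=[0.1, 10.0])
```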
46,132
Raw data outperforms Z-score transformed data in SVM classification
SVM is minimizing hinge loss with ridge regularization $$ \min_\mathbf w \sum_i(1-y_i \mathbf w\cdot \mathbf x_i)_+ +\lambda ||\mathbf w||^2 $$ So, the scaling makes a difference because of the regularization term. My hypothesis would be that the original scale of your features affects how strongly the different features are regularized, which helps performance; after the scaling, that effect disappears. For example, suppose you have 2 features, the first on a scale of $10,000$ and the second on a scale of $0.1$. If you do not perform scaling, the SVM will regularize the 2nd feature much more, and have almost no effect on the weight of the 1st feature. If you do perform scaling, the SVM will regularize both features equally. You can validate my hypothesis by checking the "feature importance" in your data. If the features with larger magnitude are much more important, and at the same time you have many useless features on a small scale, then my hypothesis might be right.
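A small numpy illustration of this point (illustrative only, not a fitted SVM): the same signal carried on a huge scale and on a tiny scale can produce identical margins, but the weight the small-scale copy needs is punished far more by the ridge term.

```python
import numpy as np

def svm_objective(w, X, y, lam):
    # hinge loss with ridge penalty: sum_i (1 - y_i w.x_i)_+ + lam * ||w||^2
    margins = 1.0 - y * (X @ w)
    return np.sum(np.maximum(0.0, margins)) + lam * (w @ w)

rng = np.random.default_rng(3)
z = rng.normal(size=200)                      # one underlying signal
y = np.sign(z + 0.3 * rng.normal(size=200))

# the same signal carried once on a huge scale and once on a tiny scale
X = np.column_stack([1e4 * z, 1e-1 * z])

# these two weight vectors produce identical margins (w.x = z in both cases),
# but the small-scale feature needs a weight 1e5 times larger
w_big   = np.array([1e-4, 0.0])
w_small = np.array([0.0, 10.0])

obj_big = svm_objective(w_big, X, y, lam=1.0)
obj_small = svm_objective(w_small, X, y, lam=1.0)
```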
46,133
Raw data outperforms Z-score transformed data in SVM classification
Two points: First: are your classes distributed equally (I think this is in General Abrial's link, but I haven't read it so I'm unsure)? I.e. do you have 50% class A and 50% class B, or is one class dominant? Accuracy is very sensitive to the class-imbalance issue. E.g. if 90% of test cases are class A and 10% are class B, then a dumb classifier that always predicts 'A' will score 90% accuracy (clearly not a good classifier, yet it gets 90%). Therefore, you should tell us the distribution of classes in order to allow us to conclude whether the 2% to 3% increase in accuracy is actually due to good generalization (as opposed to a dumb model that is taking trivial advantage of an imbalanced class distribution). Second: Once we are happy that accuracy is not being abused, and right before we try to explain/justify what might be causing the 2% to 3% increase in accuracy, it is very important to first answer this question: Is this difference significant to begin with, or is it due to sheer dumb luck? We have two hypotheses: Null: there is no systematic difference, and the observed change is due to random chance. Alt: there is a systematic difference. In my view, 100 samples is usually too few to reject the null hypothesis when the difference in accuracy is only 2% to 3%. I really think you need to show statistical significance tests. By experience, I feel that you will fail to reject the null hypothesis with $p \le 0.05$ given that your samples number only 100. In my view, it is quite possible that such a 2% to 3% difference could be due to the randomness associated with deciding the train-test split, or the initial randomization of k-fold cross-validation. In summary: Report the ratio of the number of samples from class A to that of class B. Report the $p$ value of the observed difference in accuracy.
Depending on this, we could conclude surprising things, such as that the more accurate classifier is actually inferior if it is blindly sensitive to the majority class. But maybe we could also conclude what you were expecting. Regardless, in my view, the input is not adequate to know what is happening, and we need the above points addressed first.
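The class-imbalance point in the first part is easy to demonstrate with a hypothetical 90/10 split (plain numpy; the split and sample size are made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(4)

# hypothetical imbalanced test set: roughly 90% class A (0), 10% class B (1)
y_true = (rng.random(1000) < 0.10).astype(int)

# a "dumb" classifier that always predicts the majority class A
y_pred = np.zeros_like(y_true)

accuracy = np.mean(y_pred == y_true)   # near 0.90 despite learning nothing
```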
46,134
Why does Bengio, Goodfellow and Courville deep learning theory book claim $\hat{y} = x w_1 ... w_i ... w_l$ is a non-linear function of $w_i$?
It is not incorrect. It is nonlinear in $\mathbf{w}$. Can you take a matrix, premultiply the weight vector by it, and get that? What I mean is that there is no $\mathbf{A}$ such that $\mathbf{A} \mathbf{w} = x w_1 \cdots w_l$. The key is to see that it is a function in $\mathbf{w}$. $$f(\mathbf{w}) = x w_1 \cdots w_l$$ Pick two of these vectors, $\mathbf{w^1}$ and $\mathbf{w^2}$. Clearly $$f(\mathbf{w^1} + \mathbf{w^2}) \neq f(\mathbf{w^1}) + f(\mathbf{w^2})$$. I think the trouble is that you are thinking about it as a function in $x$. Your proof is correct if you were trying to demonstrate it's linear in this guy. Also, if you fix all the weight elements except one, $w_i$, and think about the function as a function in this one $w_i$, then it would be linear in that. But it is not linear in weight vectors $\mathbf{w}$.
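A quick numeric check of the additivity failure described above, with arbitrary example values for $x$ and the two weight vectors:

```python
import numpy as np

x = 2.0   # arbitrary fixed input

def f(w):
    # y_hat = x * w_1 * ... * w_l, viewed as a function of the weight vector w
    return x * np.prod(w)

w1 = np.array([1.0, 2.0, 3.0])
w2 = np.array([0.5, 1.0, 4.0])

lhs = f(w1 + w2)        # f at the sum of the two weight vectors: 2 * 1.5 * 3 * 7
rhs = f(w1) + f(w2)     # sum of the individual evaluations: 12 + 4
```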
46,135
Why does Bengio, Goodfellow and Courville deep learning theory book claim $\hat{y} = x w_1 ... w_i ... w_l$ is a non-linear function of $w_i$?
As @amoeba said, it is non-linear in the combination $\prod_{i=1}^{n} w_{i}$. Let's see an example where each $w_{i}$ is doubled. Then the new value becomes $2^{n}$ times the old value, whereas it should have been just $2$ times the old value for a linear function.
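The $2^{n}$ behaviour can be checked directly (arbitrary example values for $x$ and the weights):

```python
import numpy as np

x = 3.0
w = np.array([0.5, 2.0, 1.5, 4.0])   # n = 4 weights, arbitrary values

def f(w):
    return x * np.prod(w)

# doubling every weight multiplies the output by 2**n, not by 2
ratio = f(2.0 * w) / f(w)
```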
46,136
Why does Bengio, Goodfellow and Courville deep learning theory book claim $\hat{y} = x w_1 ... w_i ... w_l$ is a non-linear function of $w_i$?
To be fair, it is easy to misunderstand it to mean "a nonlinear function of each of the weights $w_i$." In that case, your analysis would be correct. For what it's worth, since we're working with real numbers here, you can use a simpler definition of linear function: $$ f(\alpha x) = \alpha f(x) $$ In other words, that scaling the input scales the output by the same amount. Clearly for our function $$ \hat{y} = x w_1 ... w_i ... w_l $$ Multiplying $x$ by some $\alpha$ will also scale $\hat{y}$ by the same amount, and similarly for each $w_i$.
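Both homogeneity claims are easy to verify numerically (arbitrary example values; the point is only that scaling $x$, or any single $w_i$, scales $\hat{y}$ by the same factor):

```python
import numpy as np

x = 1.7
w = np.array([0.3, 2.0, 5.0])
alpha = 3.0

def yhat(x, w):
    return x * np.prod(w)

# scaling the input x scales the output by the same factor ...
out_x = yhat(alpha * x, w)

# ... and so does scaling any single weight w_i on its own
w2 = w.copy()
w2[1] *= alpha
out_w = yhat(x, w2)
```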
46,137
How is repeated-measures ANOVA a special case of linear mixed models?
Repeated-measures ANOVA is a special case of linear mixed effects models because it is less flexible with regard to the structure of the "random-effects" part of a linear mixed effects model, and it assumes some further conveniences for the data at hand. The repeated-measures ANOVA starts with the general form: \begin{align} y_{ij} = \mu_{ij} + \pi_{ij}+ \epsilon_{ij} \end{align} where $i$ indexes the subject, $j$ the time-point, $\mu_{ij}$ is the mean at time $j$ for individual $i$, $\pi_{ij}$ is the consistent departure of $y_{ij}$ from $\mu_{ij}$ for the $i$-th individual, and $\epsilon_{ij}$ are the errors. By consistent one means that under (hypothetical) repetitions from the same individual, $y_{ij}$ has mean $\mu_{ij} + \pi_{ij}$. It is what one would describe as the conditional mean response in the context of an LME. This is fine, but looking at the "random-effects" part, the repeated-measures ANOVA assumes that the distribution of the response variables has compound symmetry. This means that all response variables have equal variance, and each pair of response variables has a common correlation. (This strongly relates to the concept of sphericity - theoretically you only need sphericity rather than CS, but it is (very) hard to get sphericity without CS; Huynh & Feldt, 1970, JASA.) On the other hand, to quote Davis, 2002, Chapt. 6: "The linear mixed models approach to repeated measurements views the analysis as a univariate regression analysis of responses with correlated errors." In that respect the correlation can have many different structures: Toeplitz, AR(1), compound symmetry (as above), random intercept and slope, etc. You can mix different error sources "without" a problem (e.g. even the aov documentation says that "If there are two or more error strata, the methods used are statistically inefficient without balance, and it may be better to use lme in package nlme..."). 
Finally, coming to the conveniences part, the repeated-measures ANOVA model cannot: 1. accommodate variation among experimental units with respect to the number and timing of observations, 2. handle missing data, or 3. handle time-dependent covariates. These goodies come with linear mixed models only.
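For concreteness, the compound-symmetry covariance structure that repeated-measures ANOVA assumes can be written down directly (a numpy sketch with illustrative values: 4 time points, variance 2.5, common correlation 0.6):

```python
import numpy as np

def compound_symmetry(n_times, variance, rho):
    # equal variances on the diagonal, one common correlation everywhere else --
    # the covariance structure repeated-measures ANOVA assumes for the responses
    R = np.full((n_times, n_times), rho)
    np.fill_diagonal(R, 1.0)
    return variance * R

S = compound_symmetry(4, variance=2.5, rho=0.6)
```

Structures like Toeplitz or AR(1) would replace the constant off-diagonal with lag-dependent entries, which is exactly the extra flexibility the LME formulation buys.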
46,138
Finding the distribution of iid variables X, Y given distribution of X-Y
We know two independent random variables $X,Y$ have a common distribution, and we know both have zero expectation (or some other convenient centering). We don't know what that common distribution is, but we do know the distribution of $D=X-Y$. Is it possible to determine the distribution function of $X$ and $Y$? Let us assume that $X,Y,D$ have continuous distributions, with cumulative distribution function (cdf) and density $F,f$ (for $X,Y$) and $G,g$ (for $D$). First, note that $D$ always has a symmetric distribution: $$ \newcommand\myeq{\mathrel{\stackrel{{\mbox{ D}}}{=}}} X-Y \myeq Y-X \myeq -(X-Y) $$ since $X\myeq Y$, where the symbol $\myeq$ means "has the same distribution". So for the problem to be well posed, it is necessary that we postulate a symmetric distribution for the difference $D$. Now $$ G(u) =P(X-Y \le u) = \int_{-\infty}^\infty P(X \le u+Y \mid Y=y) f(y)\; dy \\ = \int F(u+y) f(y) \, dy $$ and by differentiation wrt $u$ under the integral sign we get $$ g(u) = \int f(u+y) f(y) \; dy $$ which is an integral equation. But for a probability problem it is easier to formulate this in terms of the moment generating function (mgf) of the difference, assuming that it exists. If it does not exist, we can work in like manner using the characteristic function. So assume the difference has mgf $M(t)$ and $X,Y$ have common mgf $G(t)$. Since $D$ has a symmetric distribution, we have $$ \DeclareMathOperator{\E}{\mathbb{E}} M(t) = \E e^{tD} = \E e^{-tD} = M(-t) $$ so that necessarily $M(t)=M(-t)$. We also find $$ M(t)= \E e^{t(X-Y)} = \E e^{tX} \E e^{-tY} = G(t) G(-t) $$ giving the equation $M(t) = G(t) G(-t)$, which we can try to solve for $G(t)$. But this is not really enough, since nothing guarantees that a function $G(t)$ we find that way is an mgf for a probability distribution! Let us look at some examples. 
If $D$ has a centered normal distribution with variance 2, its mgf is $M(t)= e^{t^2} = \exp\left( \frac12 t^2 + \frac12 t^2\right)$, so we find the solution $G(t) = e^{\frac12 t^2}$, which is indeed the mgf of the standard normal distribution. If $D$ has a centered triangular distribution with density function $$ f(x) =\begin{cases} 1-|x|,& |x| \le 1 \\ 0, & |x| > 1 \end{cases} $$ then its mgf is $M(t) = \frac{e^t-2+e^{-t}}{t^2}$, which can be factored as $M(t) = \left(\frac{e^{t/2} - e^{-t/2}}{t}\right)^2$, giving the factor $G(t) = \frac{e^{t/2} - e^{-t/2}}{t}$, which is indeed the mgf of the uniform distribution on the interval $(-1/2, 1/2)$. Now let $D$ have the symmetric Laplace distribution with mgf (see Wikipedia) $M(t)=\frac1{1-t^2}= \frac1{1-t} \cdot \frac1{1+t}=G(t)G(-t)$ with $G(t)=\frac1{1-t}$, which is indeed the mgf of an exponential distribution. In the first two examples, our solution was symmetric, but in this third example the solution is an asymmetric distribution. Is there in this case also a symmetric solution? We could try the alternative solution $$ G(t) =\sqrt{\frac1{1-t^2}} $$ But I do not know if this is a valid mgf of some probability distribution. If it is, we would have shown that this problem does not necessarily have a unique solution. The OP did not tell us in which form he has information on the distribution of $D$. If he has an iid sample from $D$, maybe he could calculate an empirical estimate of the moment generating function, estimate $G(t)$ by taking the square root of the estimate of $M(t)$, and try to approximately invert that by using a saddlepoint approximation; see How does saddlepoint approximation work? If he wants a better answer we need to know in which form his information comes.
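The Laplace/exponential factorisation in the third example can be checked numerically for $|t|<1$ (plain Python; the grid of $t$ values is arbitrary):

```python
# the symmetric Laplace mgf M(t) = 1/(1 - t^2) factors as G(t) * G(-t)
# with G(t) = 1/(1 - t), the mgf of a standard exponential distribution
def M(t):
    return 1.0 / (1.0 - t**2)

def G(t):
    return 1.0 / (1.0 - t)

ts = [-0.9, -0.5, 0.0, 0.3, 0.7]
factor_err = [abs(M(t) - G(t) * G(-t)) for t in ts]   # M(t) = G(t) G(-t)
symmetry_err = [abs(M(t) - M(-t)) for t in ts]        # M(t) = M(-t)
```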
Finding the distribution of iid variables X, Y given distribution of X-Y
As a counter-example: If $X \sim N(\mu, \sigma^2)$ and $Y \sim N(\mu, \sigma^2)$ are iid, then $Z = X - Y$ is $N(0, 2 \sigma^2)$. Conversely, if we observe that $Z \sim N(0, 2 \sigma^2)$, then there are an infinite number of Normal parents $X \sim N(\mu, \sigma^2)$ that satisfy the same relation ... i.e. while we can fix $\sigma$, there are an infinite number of $\mu$ solutions.
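A short simulation (a sketch I added, not part of the original answer) illustrates the non-identifiability: whatever the common mean $\mu$, $Z = X - Y$ has mean 0 and standard deviation $\sigma\sqrt{2}$.

```python
import numpy as np

rng = np.random.default_rng(1)
n, sigma = 100_000, 2.0

# Whatever the common mean mu, X - Y has mean ~0 and sd ~ sigma*sqrt(2) ~ 2.828.
for mu in (-5.0, 0.0, 10.0):
    z = rng.normal(mu, sigma, n) - rng.normal(mu, sigma, n)
    print(mu, round(z.mean(), 3), round(z.std(), 3))
```

The printed moments are essentially identical for all three values of $\mu$, so the distribution of the difference carries no information about $\mu$.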
Error bars, linear regression and "standard deviation" for point
Your dream of a "global SD" for estimated errors in $x$ given a value of $y$ is not possible. If what you care about is the SD of a prediction of $x$ given a value of $y$, then what you should examine is the square root of equation (10) of the linked reference. The same result is provided in equation 5.25 of an online analytical chemistry textbook that I find more generally useful and of whose content you should, as a chemist, be aware. Say that you generate a standard curve with known values of $x$ and measured values of $y$. The slope of the standard curve by linear regression was $\beta_1$, and the standard deviation about the regression for the standard curve was: $$s_r=\sqrt{\frac{\sum_i(y_i-\hat{y_i})^2}{n-2}}$$ where $y_i$ are the individual observed values in the standard curve, $\hat{y_i}$ are the corresponding individual predicted values from the regression and $n$ is the number of observations making up the standard curve. You then make $m$ subsequent measurements of $y$ on a sample with unknown $x$ to estimate that unknown value of $x$, obtaining a mean value $\overline{Y}$. The standard deviation of the estimated value in $x$ based on this value of $\overline{Y}$ is then: $$s_x=\frac{s_r}{\beta_1}\sqrt{\frac{1}{m}+\frac{1}{n}+\frac{(\overline{Y}-\bar{y})^2}{\beta_1^2\sum_i(x_i-\bar{x})^2}}$$ In this equation individual $x$ values for generating the standard curve were $x_i$ with mean value $\bar{x}$; the corresponding $y$ values for the standard curve had mean value $\bar{y}$. Note that this standard deviation increases as the observed mean value $\overline{Y}$ for the unknown sample moves away from the mean value $\bar{y}$ determined when generating the standard curve. There thus is no global SD value for all $x$ predicted on the basis of measuring $y$. It must be calculated anew for each unknown sample. You will obtain the most precise results if your unknowns are close to the mean value of the standards used to generate the standard curve. 
Happily, there is an R package chemCal that can perform all these calculations in an even more general setting where different observations have different weights. The function of interest in that package is inverse.predict.
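For readers not working in R, the $s_x$ formula above is straightforward to compute directly. The sketch below (my own; the function name and calibration data are hypothetical, and chemCal's inverse.predict remains the vetted implementation) fits the standard curve by OLS and evaluates $\hat{x}$ and $s_x$ for a new mean response.

```python
import numpy as np

def inverse_prediction_sd(x_std, y_std, y_new_mean, m):
    """Estimate x from the mean of m new y readings, with the SD s_x above.
    x_std, y_std: known x values and measured y values of the standard curve."""
    x_std = np.asarray(x_std, float)
    y_std = np.asarray(y_std, float)
    n = len(x_std)
    b1, b0 = np.polyfit(x_std, y_std, 1)            # slope and intercept
    resid = y_std - (b0 + b1 * x_std)
    s_r = np.sqrt(np.sum(resid**2) / (n - 2))       # SD about the regression
    sxx = np.sum((x_std - x_std.mean())**2)
    s_x = (s_r / b1) * np.sqrt(
        1.0 / m + 1.0 / n + (y_new_mean - y_std.mean())**2 / (b1**2 * sxx)
    )
    x_hat = (y_new_mean - b0) / b1
    return x_hat, s_x

# Hypothetical calibration data: concentrations vs. instrument signal.
x = [0.0, 0.1, 0.2, 0.3, 0.4, 0.5]
y = [0.00, 0.12, 0.21, 0.33, 0.39, 0.52]
x_hat, s_x = inverse_prediction_sd(x, y, y_new_mean=0.27, m=3)
print(x_hat, s_x)
```

Note how $s_x$ grows as `y_new_mean` moves away from the mean of the standards, as the text explains.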
Error bars, linear regression and "standard deviation" for point
There is a relatively simple resolution of this problem: compute a “fiducial limit” based on “inverse regression” [Draper & Smith 1981]. The idea is to create confidence envelopes for the true line and then find the range of $X$ values where these envelopes enclose the target response. After introducing some notation (intended to match that in Draper & Smith), this answer performs a preliminary analysis of the situation, illustrates the idea with a plot of simulated data, and presents the formulas. It concludes with a brief discussion (in which a simple approximation is presented) and a reference to the principal source of this solution, Draper & Smith's regression textbook. (The source of this answer is a report I wrote years ago concerning ongoing monitoring of concentrations in the environment: the $X_i$ were time and the $Y_i$ were log concentrations. The problems of (a) monitoring to determine when a value will reach a predetermined target and (b) calibration of measurement systems--where the $X_i$ are known values and the $Y_i$ are the instrument's responses--are the two situations in which I have found this procedure to be most useful.) Let's establish notation. The data are $(X_i, Y_i)$, $i=1, 2, \ldots, n$. The model is $$Y_i = \beta_0 + \beta_1 X_i + \varepsilon_i$$ for unknown parameters $\beta_0$ (the intercept) and $\beta_1$ (the slope) and independent Normal, zero-mean variates $\varepsilon_i$ with common (unknown) variance $\sigma^2$. Ordinary Least Squares regression obtains estimates $b_0$, $b_1$, and $s$ of the unknowns $\beta_0$, $\beta_1$, and $\sigma$. 
The calculations that enter into those estimates include the means $\bar X$ and $\bar Y$ as well as the sum of squared deviations of the $X_i$, $$S_{XX} = \sum_{i=1}^n (X_i - \bar{X})^2.$$

Analysis

To begin the analysis, note that the regression line necessarily passes through the point of averages $(\bar{X}, \bar{Y})$, signifying an average response $\bar Y$ attained at the average abscissa $\bar X$. Moreover, the ordinate $\bar Y$ is Normally distributed, uncorrelated with the estimated slope $b_1$, and has a standard error that decreases to zero as the amount of data increases. The value of $X$ for any given $Y_0$ can be estimated by starting here and extrapolating, yielding an estimate of $$\hat{X}_0 = \bar{X} + (Y_0 - \bar{Y})/b_1.$$

The second step is to note that for any value $X$ we can compute an upper confidence limit for the fitted response at $X$. The need for a confidence limit arises from uncertainty about the values of the coefficients $\beta_0$ and $\beta_1$: we are not exactly sure of the true intercept and true slope, so the true line really could lie within a range of possible lines. The fitted response at $X$ can be written $$\hat{Y}(X) = \bar{Y} + b_1(X - \bar{X})$$ and the standard error of that fitted value equals $$\operatorname{se}(\hat{Y}(X)) = s\left(\frac{1}{n} + \frac{(X - \bar{X})^2}{S_{XX}}\right)^{1/2}.$$ The fitted value is Normally distributed, whence an upper confidence limit of confidence $1 - \alpha$ can be constructed of the form $$\operatorname{UCL}(X) = \hat{Y}(X) + t(n-2, \alpha) \operatorname{se}(\hat{Y}(X))$$ and a lower confidence limit (LCL) is constructed analogously. (As usual, $t$ refers to percentage points of a Student $t$ distribution.) As $X$ varies, the UCL and LCL trace hyperbolic arcs lying above and below the fitted line. In the accompanying figure, the horizontal axis plots the $X$ values and the vertical axis the $Y$ values; the hyperbolic arcs are shown as green (LCL) and yellow (UCL) curves.

The fiducial limits are found by intersecting these arcs with a horizontal line at the height $Y_0$, designated "Target" in the legend. The resulting UCL is shown with a diamond symbol. This illustration uses simulated data: this allows us to see how additional data might reasonably vary from what the calculations lead us to expect. (The reason why "observed" and "simulated" values have been visually connected is that this shows a plot of concentration vs. time of a presumably continuous process.)

Solution

To find the "upper fiducial limit," or "inverse confidence limit for $X$ given $Y_0$" ([Draper & Smith 1981] section 1.7), find the largest solution $X$ of the equation $$Y_0 = \operatorname{UCL}(X),$$ if such a solution exists. This can be solved with the quadratic formula, giving the upper limit $$X_U = \bar{X} + \frac{D_0 + g\sqrt{D_0^2 + (1-g^2)S_{XX}/n}}{1-g^2}, \tag{1}$$ where $$D_0 = (Y_0 - \bar Y) / b_1$$ is the estimated offset from $\bar X$ of the $X$ value corresponding to $Y_0$, $$g^2 = \frac{t^2 s^2}{b_1^2 S_{XX}}$$ is an auxiliary calculation, and $$t = t(n-2, \alpha).$$ A lower confidence limit on $X$ is obtained by using the negative square root $-g$ in $(1)$. (These formulas are equivalent to [Draper & Smith] equation 1.7.6. I write $g^2$ here in place of their $g$. This version is a little easier to compute with.)

Discussion

Neither confidence limit has to exist. They can be found only when there is confidence that the slope truly is nonzero. Draper & Smith suggest that computing confidence limits for $X$ is "not of much practical value" unless $g^2 < 0.2$, although they do not provide any justification for such an omnibus statement.

When $g^2$ is relatively small, a good approximation is obtained by expanding $(1)$ in a power series in its positive square root $g$ and stopping after the linear term, yielding $$X_U \approx \bar{X} + D_0 + g\sqrt{D_0^2 + S_{XX}/n} + \cdots\tag{2}.$$ Note that $g^2$ is small when, relative to the estimated variance $s^2$, the estimated coefficient $b_1$ is large in absolute value, the variance of the $X_i$ (that is, $S_{XX}/n$) is large, and $t$ is small (that is, extremely high confidence is not required). In short, any combination of a large absolute slope, wide spread in the $X_i$, large amounts of data, relatively small variation around a linear curve, and/or modest confidence needs will assure the approximation $(2)$ is a good one. Also note that as additional data are collected throughout a range of $X$ values that cover the confidence limit, $X_U$ converges to $\bar{X}+D_0$, the estimated value, as one would expect of a genuine confidence limit.

References

Draper, NR and H Smith, 1981: Applied Regression Analysis, Second Edition. John Wiley & Sons, New York.
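The quadratic-formula solution can be sketched in a few lines of Python (my own sketch, using the standard OLS convention $\hat Y(X)=\bar Y + b_1(X-\bar X)$ and simulated data; not a vetted implementation).

```python
import numpy as np
from scipy import stats

def fiducial_limits(x, y, y0, alpha=0.05):
    """Inverse-regression (fiducial) limits for X at target response y0,
    following Draper & Smith; returns (lower, estimate, upper) or None."""
    x = np.asarray(x, float)
    y = np.asarray(y, float)
    n = len(x)
    b1, b0 = np.polyfit(x, y, 1)                       # OLS slope and intercept
    s = np.sqrt(np.sum((y - (b0 + b1 * x))**2) / (n - 2))
    sxx = np.sum((x - x.mean())**2)
    t = stats.t.ppf(1 - alpha, n - 2)
    g2 = t**2 * s**2 / (b1**2 * sxx)
    if g2 >= 1:                                        # slope not clearly nonzero
        return None
    d0 = (y0 - y.mean()) / b1                          # offset of estimate from x-bar
    half = np.sqrt(g2) * np.sqrt(d0**2 + (1 - g2) * sxx / n)
    lo = x.mean() + (d0 - half) / (1 - g2)
    hi = x.mean() + (d0 + half) / (1 - g2)
    return lo, x.mean() + d0, hi

# Simulated data from a known line y = 1 + 2x with noise
rng = np.random.default_rng(3)
x = np.linspace(0, 10, 30)
y = 1 + 2 * x + rng.normal(0, 0.5, x.size)
print(fiducial_limits(x, y, y0=11.0))                  # limits near x = 5
```

When the slope is not significantly different from zero ($g^2 \ge 1$), no limits exist, matching the remark above.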
Beginner references to understand probabilistic principal component analysis (PPCA)
PPCA was introduced in Tipping & Bishop, 1999, Probabilistic Principal Component Analysis. I would say that this paper itself is one of the best references: it is concise and clear. Nevertheless, it might be difficult for a beginner. If so, you can try Bishop's textbook Pattern Recognition and Machine Learning, which is excellent and includes a thorough discussion of PPCA in Chapter 12. In order to prepare for this chapter, one would need to have some understanding of basic probability theory (Chapter 1), multivariate Gaussian distribution (Chapter 2), and expectation-maximization algorithm (Chapter 9). The entire book is freely available online in PDF.
A non-uniform distribution of $p$-values...again
When you write this: Suppose X is a Rayleigh random variable (with shape parameter b). It can be shown that the random variable Y=X/mean(X) You're talking about dividing by the population mean $\mu_X$, a fixed constant, giving $Y_i=X_i/\mu_X$. Then in your experiment, you divide by the sample mean: standardize these numbers by dividing each number by the overall sample mean. This changes the distribution. The distribution of $X_i/\bar{X}$ is not Rayleigh. More specifically, I think you should tend to see fewer large deviations from the Rayleigh cdf in this statistic than you see with the one that actually has the standard Rayleigh distribution, because the sample estimate will produce a fitted cdf that's "closer to the data" than the true one; as a result dividing by that estimate produces a standardized ecdf that's closer to the hypothetical distribution than you'd get if you divided by the population mean. As a result I'd expect you should get an excess of large p-values and a deficit of small ones. Your results are pretty much exactly what I'd have anticipated. This effect is well known; we see it in other distributions. It's why the Lilliefors test* has smaller critical values than the Kolmogorov-Smirnov (which has no estimated parameters). The general idea is to use the same "largest difference in cdf" test statistic as with the Kolmogorov Smirnov but with the "theoretical" cdf being based on one or more estimated parameters (or equivalently, scaling the sample to some standard form using estimated parameters). 
* (unfortunately, the text of the Wikipedia article at the link presently suggests the Lilliefors test is only for normality, but he also covered the exponential case, as you can see in the "Sources" section at the bottom of the article) You could actually use the exponential-version of the Lilliefors test [1] for the Rayleigh distribution -- since the square of a Rayleigh random variate is exponential, you can just square your original data and test for exponential. (In this case you'd be dividing the squared data by its mean, not squaring your scaled values.) Note that the asymptotic 5% critical value for the Kolmogorov-Smirnov is $1.36/\sqrt{n}$ while that for the Lilliefors when testing the exponential is $1.077/\sqrt{n}$ (i.e. as I suggested above, dividing an exponential sample by its mean produces a scaled ecdf which tends to be closer to the hypothetical than if you divided by the population mean). [You could obtain critical values (and/or p-values) for a Lilliefors test using simulation under the null hypothesis. This is what Lilliefors actually did, but his simulation-sizes were pretty small (it was the 1960s, so computing facilities were limited) -- so you'd probably want to redo the simulation, particularly if you want p-values. If critical values are sufficient, there are more recent/more accurate tables available] Added in edit: After a bit of googling around, it looks like the idea of using Lilliefors test (for the exponential) to test for Rayleigh (after transforming) was discussed in Edgeman & Scott (1987) [2]. [1] Lilliefors, H. (1969), "On the Kolmogorov–Smirnov test for the exponential distribution with mean unknown", Journal of the American Statistical Association, Vol. 64 . pp. 387–389. [2] R.L. Edgeman, R.C. Scott (1987), "Lilliefors's test for transformed variables", Brazilian Journal of Probability and Statistics, 1, 101–112.
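The effect is easy to reproduce by simulation (a sketch of mine, not from the answer): square Rayleigh data so it becomes exponential, divide by the sample mean, and apply the plain Kolmogorov-Smirnov test as if no parameter had been estimated. Small p-values then occur far less often than the nominal rate.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
n, reps = 100, 2000
pvals = np.empty(reps)
for i in range(reps):
    x = rng.rayleigh(scale=2.0, size=n)
    e = x**2                 # a squared Rayleigh variate is exponential
    e = e / e.mean()         # standardize by the *sample* mean (estimated parameter)
    # Plain K-S test against Exponential(1), ignoring that the mean was estimated:
    pvals[i] = stats.kstest(e, "expon").pvalue
# Far fewer small p-values than the nominal rate, as the answer predicts:
print(np.mean(pvals < 0.05))
```

A proper Lilliefors-style test would instead compare the same statistic against simulated critical values under the estimated-parameter null.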
A non-uniform distribution of $p$-values...again
When you write this: Suppose X is a Rayleigh random variable (with shape parameter b). It can be shown that the random variable Y=X/mean(X) You're talking about dividing by the population mean $\mu
A non-uniform distribution of $p$-values...again When you write this: Suppose X is a Rayleigh random variable (with shape parameter b). It can be shown that the random variable Y=X/mean(X) You're talking about dividing by the population mean $\mu_X$, a fixed constant, giving $Y_i=X_i/\mu_X$. Then in your experiment, you divide by the sample mean: standardize these numbers by dividing each number by the overall sample mean. This changes the distribution. The distribution of $X_i/\bar{X}$ is not Rayleigh. More specifically, I think you should tend to see fewer large deviations from the Rayleigh cdf in this statistic than you see with the one that actually has the standard Rayleigh distribution, because the sample estimate will produce a fitted cdf that's "closer to the data" than the true one; as a result dividing by that estimate produces a standardized ecdf that's closer to the hypothetical distribution than you'd get if you divided by the population mean. As a result I'd expect you should get an excess of large p-values and a deficit of small ones. Your results are pretty much exactly what I'd have anticipated. This effect is well known; we see it in other distributions. It's why the Lilliefors test* has smaller critical values than the Kolmogorov-Smirnov (which has no estimated parameters). The general idea is to use the same "largest difference in cdf" test statistic as with the Kolmogorov Smirnov but with the "theoretical" cdf being based on one or more estimated parameters (or equivalently, scaling the sample to some standard form using estimated parameters). 
* (unfortunately, the text of the Wikipedia article at the link presently suggests the Lilliefors test is only for normality, but he also covered the exponential case, as you can see in the "Sources" section at the bottom of the article) You could actually use the exponential-version of the Lilliefors test [1] for the Rayleigh distribution -- since the square of a Rayleigh random variate is exponential, you can just square your original data and test for exponential. (In this case you'd be dividing the squared data by its mean, not squaring your scaled values.) Note that the asymptotic 5% critical value for the Kolmogorov-Smirnov is $1.36/\sqrt{n}$ while that for the Lilliefors when testing the exponential is $1.077/\sqrt{n}$ (i.e. as I suggested above, dividing an exponential sample by its mean produces a scaled ecdf which tends to be closer to the hypothetical than if you divided by the population mean). [You could obtain critical values (and/or p-values) for a Lilliefors test using simulation under the null hypothesis. This is what Lilliefors actually did, but his simulation-sizes were pretty small (it was the 1960s, so computing facilities were limited) -- so you'd probably want to redo the simulation, particularly if you want p-values. If critical values are sufficient, there are more recent/more accurate tables available] Added in edit: After a bit of googling around, it looks like the idea of using Lilliefors test (for the exponential) to test for Rayleigh (after transforming) was discussed in Edgeman & Scott (1987) [2]. [1] Lilliefors, H. (1969), "On the Kolmogorov–Smirnov test for the exponential distribution with mean unknown", Journal of the American Statistical Association, Vol. 64 . pp. 387–389. [2] R.L. Edgeman, R.C. Scott (1987), "Lilliefors's test for transformed variables", Brazilian Journal of Probability and Statistics, 1, 101–112.
Possible to do Fourier decomposition using Linear Regression?
You can do that (estimate the magnitudes via regression), but in fact you can estimate both the magnitudes and the phases using regression, so it still works when the phases are unknown, and it works in the presence of noise in $y$ (in the sense, at least, that you can still estimate those coefficients, albeit with some noise). Specifically, rather than fitting only $\cos$ terms, fit both $\sin$ and $\cos$ terms; from the two coefficients you can recover both magnitude and phase. Fitting trigonometric terms in this way is discussed here, including fitting several harmonics of the period and estimating both phase and amplitude. If the frequency is also unknown you have a nonlinear problem, and in that case you would use other approaches (such as nonlinear regression, discussed in another answer to the first question linked here).
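The sin-plus-cos idea can be sketched in plain Python (the frequency, amplitude, phase, and noise level below are made-up illustration values; the 2×2 normal equations are solved by hand to keep it dependency-free):

```python
import math
import random

random.seed(0)
omega, amp, phase = 2.0, 1.5, 0.7
t = [i * 0.05 for i in range(200)]
y = [amp * math.cos(omega * ti + phase) + random.gauss(0, 0.1) for ti in t]

# regressors: cos and sin at the known frequency
c = [math.cos(omega * ti) for ti in t]
s = [math.sin(omega * ti) for ti in t]

# solve the 2x2 normal equations for y ~ a*cos + b*sin
scc = sum(ci * ci for ci in c)
sss = sum(si * si for si in s)
scs = sum(ci * si for ci, si in zip(c, s))
scy = sum(ci * yi for ci, yi in zip(c, y))
ssy = sum(si * yi for si, yi in zip(s, y))
det = scc * sss - scs * scs
a = (sss * scy - scs * ssy) / det
b = (scc * ssy - scs * scy) / det

# A*cos(wt + phi) = A*cos(phi)*cos(wt) - A*sin(phi)*sin(wt),
# so a = A*cos(phi), b = -A*sin(phi)
amp_hat = math.hypot(a, b)
phase_hat = math.atan2(-b, a)
print(amp_hat, phase_hat)  # close to the true 1.5 and 0.7
```

The same construction extends to several frequencies at once by adding a sin/cos pair of regressors per frequency.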
Possible to do Fourier decomposition using Linear Regression?
Of course you can do this, meaning neither the regression nor the Fourier transformation police will come and bust your house. But would it make sense? I think not. I guess you are trying to decompose a function(!) of x, where $y_i=f(x_i)$. But this is not what regression is about. Regression assumes that you do not observe $f(x_i)$ directly but only noisy observations $y_i=f(x_i) + \epsilon_i$ of it, and only from these errors do you get the normal distribution of $y$. On the other hand, you can always define a subspace $\operatorname{span}\{\phi_i, i=1,\ldots,n\}$ and project $f$ onto this subspace orthogonally to get coefficients $w$ which are then least-squares optimal. But this only makes sense if there is something meaningful or interesting about your subspace. Furthermore, this only works for finite-dimensional subspaces, and Fourier analysis is really more about infinite-dimensional problems.
Train waiting time in probability
Picture in your mind's eye that the whole train schedule is already generated; it looks like a line with marks on it, where each mark represents a train arriving. Two consecutive marks are fifteen minutes apart half the time and forty-five minutes apart half the time. Now imagine a person arrives; this means randomly dropping a point somewhere on the line. What do you expect the distance to be between the person and the next mark? First, think of the relative probability of landing in a gap of each size, and then deal with each case separately. Does this help? I could finish the answer, but I thought it would be more valuable to provide some insight so you can finish it on your own.
Train waiting time in probability
Your simulator is correct. Since 15-minute and 45-minute intervals are equally likely, you end up in a 15-minute interval 25% of the time and in a 45-minute interval 75% of the time. In a 15-minute interval, you have to wait $15 \cdot \frac12 = 7.5$ minutes on average. In a 45-minute interval, you have to wait $45 \cdot \frac12 = 22.5$ minutes on average. This gives an expected waiting time of $\frac14 \cdot 7.5 + \frac34 \cdot 22.5 = 18.75$ minutes.
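A quick simulation sketch (plain Python; the schedule horizon and the number of simulated passengers are arbitrary choices) confirms the length-biased answer of 18.75 minutes rather than the naive 15:

```python
import bisect
import random

random.seed(0)

# build a long schedule: consecutive gaps of 15 or 45 minutes, equally likely
marks, t = [], 0.0
while t < 3_000_000:
    t += random.choice((15, 45))
    marks.append(t)

# drop passengers uniformly on the timeline; wait = time until the next mark
waits = []
for _ in range(200_000):
    arrival = random.uniform(0, marks[-1] - 60)
    nxt = marks[bisect.bisect_right(marks, arrival)]
    waits.append(nxt - arrival)

print(sum(waits) / len(waits))  # close to 18.75
```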
How to determine how many variables and what kind of variables a table of data has?
There's most likely no single correct answer to the question "how many variables does this dataset have"; one can structure the data in different ways, as you've shown, leading to different numbers of columns. However, there's probably a good answer to "what structure would make this dataset most amenable to analysis", and that would probably be the first version you presented. Hadley Wickham has written about this under the name "tidy data" (see this paper). When a dataset is tidy, it's in its most readily analyzed form, i.e. its most basic form, on which transformations can be applied easily and consistently so that further analysis can be done. He argues that a dataset is best structured for analysis when:

Each variable forms a column.
Each observation forms a row.
Each type of observational unit forms a table.

The first dataset you presented, the one with 3 columns, would count as tidy under these guidelines. He also outlines 5 common ways datasets get untidy:

Column headers are values, not variable names.
Multiple variables are stored in one column.
Variables are stored in both rows and columns.
Multiple types of observational units are stored in the same table.
A single observational unit is stored in multiple tables.

The second version of your dataset, the one with four columns, exhibits issue #1. This can be seen in Section 3.1 of his paper, where he discusses a dataset about religion and income. Again, there's probably no "correct" answer to how many variables this dataset has, but to the question of how many columns it should have, the right answer would be 3. Tidy data provides an easy and consistent way to transform a dataset into whatever form is needed for later analysis; e.g., your second dataset with 4 columns is easily created from the first with a pandas groupby or an R aggregate.
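For illustration, here is a minimal plain-Python sketch (no pandas; the numbers are taken from the question's table) of how the tidy 3-column layout pivots into the wide layout, which is exactly untidy issue #1: values of Type become column headers.

```python
from collections import defaultdict

# long-format ("tidy") rows: one observation per row
rows = [
    ("Winter", "Sales", 1000), ("Winter", "Expenses", 400), ("Winter", "Profit", 250),
    ("Spring", "Sales", 1170), ("Spring", "Expenses", 460), ("Spring", "Profit", 250),
    ("Summer", "Sales", 660),  ("Summer", "Expenses", 1120), ("Summer", "Profit", 300),
    ("Fall",   "Sales", 1030), ("Fall",   "Expenses", 540),  ("Fall",   "Profit", 350),
]

# pivot to the wide form: one row per Season, one column per Type
wide = defaultdict(dict)
for season, typ, dollars in rows:
    wide[season][typ] = dollars

print(wide["Winter"])  # {'Sales': 1000, 'Expenses': 400, 'Profit': 250}
```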
How to determine how many variables and what kind of variables a table of data has?
@nick-eng pretty much answered it all (+1)! I just thought I could add some examples to illustrate his points and to show why the long format (your first table) is more efficient to work with, especially when you are working with Hadley Wickham's R packages ggplot2 and plyr. But I do have to say that I often prefer to use the wide format when reporting mean values in manuscripts.

-- Begin edit -- As @ttnphns rightly points out (see comment under OP's question), many analyses require the data to be in the long format, whereas multivariate analyses usually need to have the dependent variables as individual columns. This also holds for repeated measures when analyzed with the Anova function of the car package. -- End edit --

I used your first table and read it into R. With the dput() function, I can let R print the data into the console, from where I can copy and paste it here so that other people can work with it easily:

    d <- structure(list(
        Season = structure(c(4L, 4L, 4L, 2L, 2L, 2L, 3L, 3L, 3L, 1L, 1L, 1L),
                           .Label = c("Fall", "Spring", "Summer", "Winter"), class = "factor"),
        Type = structure(c(3L, 1L, 2L, 3L, 1L, 2L, 3L, 1L, 2L, 3L, 1L, 2L),
                         .Label = c("Expenses", "Profit", "Sales"), class = "factor"),
        Dollars = c(1000L, 400L, 250L, 1170L, 460L, 250L, 660L, 1120L, 300L, 1030L, 540L, 350L)),
        .Names = c("Season", "Type", "Dollars"), class = "data.frame", row.names = c(NA, -12L))

Make a graph using ggplot2:

    require(ggplot2)
    ggplot(d, aes(x = Season, y = Dollars)) +
        geom_bar(stat = "identity", fill = "grey") +
        # Especially for the next line you need the data in long format
        facet_wrap(~Type)

Summarizing data and calculating mean and standard error:

    require(plyr)
    d.season <- ddply(d, .(Season), summarise,
                      MEAN = mean(Dollars),
                      ERROR = sd(Dollars)/sqrt(length(Dollars)))

Make another graph using ggplot2 with the summarized data d.season:

    ggplot(d.season, aes(x = Season, y = MEAN)) +
        geom_bar(stat = "identity", fill = "grey") +
        geom_errorbar(aes(ymax = MEAN + ERROR, ymin = MEAN - ERROR), width = 0.2) +
        labs(y = "Dollars")

Now, switching back and forth between the wide and long format using the functions dcast() and melt() from the package reshape2. Note that the data will now be alphabetically ordered:

    require(reshape2)

Long to wide format:

    d.wide <- dcast(d, Season ~ Type, value.var = "Dollars")
    > d.wide
      Season Expenses Profit Sales
    1   Fall      540    350  1030
    2 Spring      460    250  1170
    3 Summer     1120    300   660
    4 Winter      400    250  1000

Wide to long format:

    d.long <- melt(d.wide, id.vars = "Season", variable.name = "Type", value.name = "Dollars")
    > d.long
       Season     Type Dollars
    1    Fall Expenses     540
    2  Spring Expenses     460
    3  Summer Expenses    1120
    4  Winter Expenses     400
    5    Fall   Profit     350
    6  Spring   Profit     250
    7  Summer   Profit     300
    8  Winter   Profit     250
    9    Fall    Sales    1030
    10 Spring    Sales    1170
    11 Summer    Sales     660
    12 Winter    Sales    1000

Compare to the original data frame (not alphabetically ordered):

    > d
       Season     Type Dollars
    1  Winter    Sales    1000
    2  Winter Expenses     400
    3  Winter   Profit     250
    4  Spring    Sales    1170
    5  Spring Expenses     460
    6  Spring   Profit     250
    7  Summer    Sales     660
    8  Summer Expenses    1120
    9  Summer   Profit     300
    10   Fall    Sales    1030
    11   Fall Expenses     540
    12   Fall   Profit     350
Bayesian vs. frequentist estimation
The simple answer to all your questions is: in a Bayesian model you include a priori information besides the data, $$ \text{posterior} \propto \text{likelihood} \times \text{prior} $$ so if you include additional information in your model, then your estimates (mean, standard deviation, etc.) can differ from the maximum likelihood estimates. You can use noninformative priors, in which case the estimates should be the same as in the maximum likelihood case. One example where Bayesian methods can outperform other methods is when we are dealing with small samples (see a nice example here), even in cases such as predicting future mid-air collisions at a time when no such collision has happened yet; in such cases out-of-data information helps us learn from insufficient data. You also ask about the posterior distribution vs. the sample mean and about point estimates. In the posterior distribution you have a whole distribution for your parameter of interest instead of a point value; you can take the mean (or median, or possibly another statistic) of this distribution to get a point value. If you are interested only in a point estimate, you can use maximum a posteriori methods and not bother with the posterior distribution. Finally, neither of the methods is "better" or "worse"; they are just different. By reviewing the many questions tagged as bayesian you can learn much about their pros and cons, for example: When are Bayesian methods preferable to Frequentist?, or Why are Bayesian methods widely considered particularly "convenient"?.
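A tiny sketch of this point, using the conjugate normal-normal case (plain Python; the true mean 5, known sd 2, sample size, and prior settings are all made-up illustration values): an informative prior pulls the posterior mean away from the sample mean, while a near-flat prior reproduces it.

```python
import random
import statistics

random.seed(0)
sigma = 2.0                                   # known data sd
data = [random.gauss(5.0, sigma) for _ in range(20)]
n, xbar = len(data), statistics.mean(data)

def posterior(mu0, tau0):
    # Normal prior N(mu0, tau0^2) for the mean, normal likelihood with known sigma:
    # the posterior is normal with precision-weighted mean
    prec = 1 / tau0**2 + n / sigma**2
    mean = (mu0 / tau0**2 + n * xbar / sigma**2) / prec
    return mean, prec**-0.5

print(xbar)                  # frequentist (ML) point estimate
print(posterior(0.0, 1.0))   # informative prior at 0 pulls the posterior mean down
print(posterior(0.0, 1e6))   # near-flat prior: posterior mean matches the sample mean
```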
Bayesian vs. frequentist estimation
I've covered this topic on other threads. I will address your specific questions here. So how does this a-posteriori distribution compare to the sample mean point estimate? Is the expected value of the a-posteriori distribution equal to the sample mean? Is the standard deviation of the a-posteriori distribution equal to the standard deviation of the sample mean? Often the mean and standard deviation of the posterior distribution will be similar to the sample mean and its standard error, but not always. It depends on the selected prior distribution and the probability model for the data. Which one of the two methods is the better one - or does the Bayesian approach just give you more information (it gives you a distribution whereas the frequentist estimation only gives you point values) The short answer to this question is that it depends on what you want to measure, the experiment or the experimenter. The frequentist defines probability in terms of a long-run frequency of the experiment. Inference on an unknown fixed parameter uses confidence levels and p-values based on the concept of sampling from repeated experiments. Historical data (external information) is incorporated into an analysis by pooling observations through the likelihood or performing a summary-level meta-analysis (Johnson 2021a). The frequentist can construct a confidence distribution or confidence curve that shows the plausibility of each hypothesis using p-values. This is analogous to the Bayesian posterior. Bayesians define probability as the belief of the experimenter, so there are no limits to how the prior can be formed to influence the inference on a parameter. It also means these statements of belief are unfalsifiable. Rather than incorporating historical data (external information) through the likelihood, Bayesians will incorporate this through the prior.
If the prior distribution is chosen in such a way that the posterior is dominated by the likelihood, Bayesian belief is more objectively viewed as a form of confidence based on frequency probability of the experiment (Johnson 2021b). This is an argument for using frequentist methods. Is it possible to get point estimates like in the frequentist estimation from an a-posterior distibution (e.g. the expected value for an point estimation for the true mean)? Yes, Bayesians will use the mean, median, or mode of the posterior distribution as a point estimate for a parameter. In non-normal settings and when considering non-linear transformations of parameters, using the posterior mean can introduce bias in estimation (considering that the parameter is an unknown fixed quantity).
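The transformation-bias point can be sketched with simulated posterior draws (plain Python; the Normal(1, 0.5) "posterior" below is a made-up stand-in): by Jensen's inequality, the posterior mean of $e^\theta$ exceeds $e$ raised to the posterior mean of $\theta$, so reporting posterior means does not commute with nonlinear transformations.

```python
import math
import random

random.seed(0)

# hypothetical posterior draws for a parameter theta
draws = [random.gauss(1.0, 0.5) for _ in range(100_000)]
theta_mean = sum(draws) / len(draws)

# posterior mean of the transformed parameter exp(theta)
exp_mean = sum(math.exp(d) for d in draws) / len(draws)

# exp of the posterior mean vs. posterior mean of the exp: they differ
print(math.exp(theta_mean), exp_mean)
```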
Maximal model for linear mixed-effects model for repeated mesaures design
The maximal structure would also need to include a random effect for the interaction between color and shape, that is:

    Y ~ color * shape + (color + shape + color:shape | subject)

This will result in all your predictors (color, shape, and their interaction) having a fixed effect (constant across subjects) and a random effect (individual fluctuations around the estimated fixed effect). In this sense the model is the maximal one. Note that it might not be fully equivalent to a repeated-measures ANOVA, as it doesn't make equally strict assumptions about the correlational structure (see Tom's answer). If you don't include the interaction in the random-effect part of the formula, individual variation in the interaction effect will not be treated as "random", and the model will not be equivalent to a repeated-measures ANOVA. Of course, the variance of the random deviates for the interaction (or any other random effect) might be so small that including it in the model does not improve the fit much. You can check this not only with the AIC but also with a likelihood ratio test, since models with vs. without one random effect are nested within one another. In principle, if the likelihood ratio test is not significant, you can safely remove that random effect. Simplifying the random-effect structure by removing negligible components would be an example of what the article you linked calls a data-driven approach. You can also simplify the model in this way, and it would still be equivalent to a repeated-measures ANOVA:

    Y ~ color*shape + (1|subject) + (0+color|subject) + (0+shape|subject) + (0+color:shape|subject)

This syntax tells lmer not to estimate the correlations of random deviates across subjects. The drawback here is that, for example, you won't be able to tell whether subjects with a large effect of color tend also to have a larger effect of shape (or a smaller effect, in the case of negative correlation).

You can easily include a between-subjects predictor; the only difference is that you can't add a random effect for it. "gender", for example, cannot have a random effect grouped by subject, but it can interact with the other fixed effects, e.g.:

    Y ~ color * shape * gender + (color + shape + color:shape | subject)
Maximal model for linear mixed-effects model for repeated mesaures design
46,153
Maximal model for linear mixed-effects model for repeated measures design
First I should say that if your aim was to formulate a mixed model that was exactly analogous to a repeated-measures ANOVA, you would also have to enforce compound symmetry, which in lme would be done as follows:

    library(lmerTest)
    library(nlme)
    # i.e. the model Y ~ color*shape + 1|subject
    fit = lme(Y ~ color*shape, random = ~1|subject,
              correlation = corCompSymm(form = ~1|subject),
              weights = NULL, data = data)
    anova(fit)
    summary(fit)

(you could also use a general correlation structure to relax the assumption of compound symmetry). Adding random slopes in lmer can sometimes improve your fit, but not always. Best is to check the Akaike Information Criterion (AIC(fit)) and see if it is actually better than a simpler random-intercept model. The difference in interpretation would basically be that in a random-intercept model, all that you add to the model is some random per-subject variation in mean reaction time. If you add random slopes, this will also allow the effect of color and/or shape on reaction time to vary across subjects. Note also that you could allow correlated or uncorrelated random intercepts & slopes.
- A random intercept model (1|subject) would merely include random variation in mean reaction time across subjects.
- A correlated intercept & slope model (color|subject) = (1+color|subject) would have a random effect of color on reaction time for each subject (so that the effect of color on reaction time differs across subjects) and would include a correlated estimate of a per-subject intercept (i.e. mean reaction time differs per subject, and this difference could be correlated to some extent with the difference in response to each color).
- A random slope model (0+color|subject) = (-1+color|subject) would allow a random effect of color on reaction time (so that the effect of color on reaction time differs across subjects) but would force the mean intercept to be the same for all subjects (i.e. the mean reaction time of all subjects would be the same, after correcting for the effects of color and shape if you include those as fixed terms).
- Finally, you could also fit a random slope & intercept model with uncorrelated slopes & intercepts using (1|subject) + (0+color|subject), as this would allow random intercepts over subjects (i.e. mean reaction time differs per subject) together with uncorrelated random variation in the effect of color on reaction time per subject.

So I suppose a full model would be

    Y ~ color*shape + (color|subject) + (shape|subject)

(with correlated random slopes and intercepts) or

    Y ~ color*shape + (1|subject) + (0+color|subject) + (0+shape|subject)

(with uncorrelated random slopes and intercepts). In lme you could also still fit different types of correlation and variance structures, though. Best to use AIC to compare the fit of those.
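The AIC comparison suggested above (AIC(fit) in R) is just 2k − 2·log L, so two candidate fits can be compared by hand; the log-likelihood values below are made-up numbers purely for illustration:

```python
def aic(loglik, n_params):
    # Akaike Information Criterion: smaller is better.
    return 2 * n_params - 2 * loglik

# Hypothetical fits: random intercept only vs. intercept + random slopes.
aic_simple = aic(loglik=-512.3, n_params=4)
aic_slopes = aic(loglik=-509.8, n_params=6)

# The extra random-slope parameters are only worth keeping if they
# lower the AIC despite the 2-per-parameter penalty.
best = "slopes" if aic_slopes < aic_simple else "simple"
```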
46,154
Pr(Z>|z|) values and the level of significance
Firstly, the p-value given for the Z-statistic would have to be interpreted as how likely it is that a result as extreme or more extreme than that observed would have occurred under the null hypothesis. I.e. 0.96 would in principle mean that the data provide very little evidence that the variable is needed (while small values such as, say, $p\leq 0.05$ would provide evidence for the likely relevance of the variable, as pointed out by others already). However, a lack of clear evidence that the variable is needed in the model to explain this particular data set would not imply evidence that the variable is not needed. That would require a different approach, and with a very large standard error one would not normally be able to say that the variable does not have an effect. Also, it is a very bad idea to decide which variables are to be included in a model based on p-values and then fit the model with or without them as if no model selection had occurred.

Secondly, as also pointed out by others, when you get this huge a coefficient (corresponding to an odds ratio of $e^{-14.29}$) and standard error from logistic regression, you typically have some problem. E.g. the algorithm did not converge, or there is complete separation in the data. If your model really did only include an intercept, then perhaps there are no events at all, and none of the records had an outcome? If so, then a standard logistic regression may not be able to tell you a lot. There are some alternatives for such sparse-data situations (e.g. a Bayesian analysis including the available prior information).
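To see how extreme the reported coefficient is, note that −14.29 on the log-odds scale corresponds to an odds ratio of roughly 6 × 10⁻⁷, i.e. essentially zero odds:

```python
import math

coef = -14.29                  # logistic regression coefficient (log-odds scale)
odds_ratio = math.exp(coef)    # the multiplicative change in odds
```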
46,155
Pr(Z>|z|) values and the level of significance
You are using the normal approximation and specifically the Wald test so you do what you would do in a regular t-test. That is, you reject the null hypothesis if the probability of the event $\left\{Z \geq |z| \right\}$ is lower than the conventional threshold of $0.05$. Alternatively you fail to reject the null hypothesis if your p-value is not small enough.
46,156
Pr(Z>|z|) values and the level of significance
The value of the coefficient and its large standard error suggest that what we are seeing here is separation, or the Hauck-Donner effect, which has its own tag, hauck-donner-effect, with a clear and helpful wiki excerpt. I think therefore that the debate about $t$ versus $z$ is a red herring. Profile likelihood would be the way to go, or reformulating the problem.
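Complete separation is easy to reproduce on a toy data set; with the penalty effectively switched off, the maximum-likelihood coefficient has no finite optimum and the optimizer simply stops at some very large value (a scikit-learn sketch; the exact coefficient depends on the optimizer's stopping rule):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Perfectly separated toy data: y == 1 exactly when x > 0.
x = np.array([-2.0, -1.0, -0.5, 0.5, 1.0, 2.0]).reshape(-1, 1)
y = np.array([0, 0, 0, 1, 1, 1])

# C=1e10 effectively removes the L2 penalty, so the MLE is unbounded
# and the fitted coefficient just grows until the optimizer gives up.
fit = LogisticRegression(C=1e10, max_iter=10_000).fit(x, y)
coef = float(fit.coef_[0, 0])
```

With any nontrivial regularization (a small C) the coefficient would instead stay finite, which is one pragmatic way of "reformulating the problem".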
46,157
Definition of improper priors
The classical definition of an improper prior in Bayesian statistics is one of a measure $\text{d}\pi$ with infinite mass $$\int_\Theta \text{d}\pi(\theta)=+\infty$$ See, e.g., Hartigan's Bayes Theory, which formalises quite nicely the use of improper priors. Any measure $\text{d}\pi$ with finite mass can be normalised into a probability measure with mass $1$. See also this related Cross Validated entry on improper priors.
46,158
Definition of improper priors
In the case of $p (\theta) \propto f (\theta) $ with $\int_{\theta \in \Theta} f (\theta) d\theta = c$ for some finite constant $c$, you just need to normalize the density to $f (\theta)/c $, which defines a proper prior.
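A quick numerical check of this normalization, using an assumed unnormalized density $f(\theta) = 2e^{-\theta}$ on $[0, \infty)$ (finite mass $c = 2$; by contrast, a flat prior on the whole real line has infinite mass and cannot be normalized):

```python
import numpy as np
from scipy.integrate import quad

f = lambda t: 2.0 * np.exp(-t)       # unnormalized "prior" on [0, inf)
c, _ = quad(f, 0, np.inf)            # finite total mass: c = 2
proper, _ = quad(lambda t: f(t) / c, 0, np.inf)   # f/c integrates to 1
```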
46,159
Independent samples t-test with unequal sample sizes
The t-test requires a set of assumptions. It assumes your data are i.i.d. (independent and identically distributed) and come from a normal distribution. If you want to compare the means of the two groups (and they satisfy the assumptions), then yes - you can use that test. As JohnK noted, you may wish to consider whether you want to assume equal variance for the two populations (and it is a reasonable rule of thumb not to make that assumption). Under the assumptions, this test works for small and large sample sizes (and in the case of large sample sizes will approach the z-test).
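The large-sample claim at the end can be checked directly: the t critical value approaches the normal (z) critical value as the degrees of freedom grow:

```python
from scipy.stats import norm, t

z_crit = norm.ppf(0.975)           # two-sided 5% critical value, ~1.96
t_small = t.ppf(0.975, df=10)      # ~2.23: noticeably wider for small samples
t_large = t.ppf(0.975, df=1000)    # ~1.96: nearly indistinguishable from z
```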
46,160
Independent samples t-test with unequal sample sizes
The two-sample t-test makes no assumption about equal sample sizes. However, if you have $2n$ observations, the best allocation of them is into two groups, each with $n$ observations. This is part of the experimental design; if you already have your observations, then you don’t get to allocate them into groups. (They either got the coronavirus miracle drug or a placebo, for instance, but if you’re designing the study, the best allocation of 10 patients would be into 5 treatment subjects and 5 placebo subjects, at least as far as statistical power goes.) The two-sample t-test does assume equal variances in the two groups. There is a slight modification called the Welch t-test that accounts for unequal variances. This is the default in R and can be performed in Python, too. Other software like SAS, Stata, and SPSS ought to be able to do the Welch test, though I’ve never done it.
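For illustration, a Welch test with unequal group sizes (and unequal variances) on simulated data; in scipy, equal_var=False selects the Welch variant:

```python
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(0)
treatment = rng.normal(loc=1.5, scale=1.0, size=30)   # n1 = 30
placebo = rng.normal(loc=0.0, scale=1.5, size=80)     # n2 = 80, larger spread

# equal_var=False -> Welch's t-test: no equal-variance
# (and no equal-sample-size) assumption.
stat, p = ttest_ind(treatment, placebo, equal_var=False)
```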
46,161
Independent samples t-test with unequal sample sizes
Sure, you do not need equal sample sizes between groups. You do not even need equal variances if you use the Welch specification of the t-test, which estimates the two groups' variances separately in the denominator. You also do not need normality of the data, just normality of the sample means and, correspondingly, of their difference (if you test H0 as E(Diff) = 0).
46,162
Estimating sample mean from a biased sample (whose generative process is known)
Formalizing gung's suggestion, you can estimate the sample mean by inverse probability weighting, also known as the Horvitz-Thompson estimator. It is admissible in the class of unbiased estimators. The H-T estimator can be used to estimate the sum $S = \sum_{i=1}^n y_i$ of sample values in a population using a random subsample, as well as the mean. Let's examine the sum estimator first. To model the subsampling, let $B_i \sim \text{Bernoulli}(p_i)$. Then the sum of the random subsample is $$\sum_{i=1}^n y_i B_i$$ and the H-T estimator $\hat{S}$ of the population sum is $$\hat{S} = \sum_{i=1}^n y_i B_i / p_i$$ It is easy to see that $\hat{S}$ is unbiased: $$\mathbb{E}[\hat{S}] = \sum_{i=1}^n y_i \mathbb{E}[B_i] / p_i = \sum_{i=1}^n y_i p_i / p_i = S$$ To estimate the mean $S/n$ we can simply use $\hat{S}/n$ if $n$ is known. Otherwise $n$ can be estimated using inverse probability weighting once again: $$\hat{n} = \sum_{i=1}^n B_i/p_i$$ Both $\hat{S}$ and $\hat{n}$ are unbiased, but $\hat{S}/\hat{n}$ may have some bias. However, it should be small when the variances of the numerator and denominator are well controlled - for example in the large-sample limit, provided the $p_i$ are not too small.

Here's some R code that shows how the H-T mean estimator works. We assume $n$ is known and compute $\hat{S}/n$, but it is easy to make it calculate $\hat{S}/\hat{n}$ instead.

    n = 1000
    pop = 66 + 2*rnorm(n)
    incl_prob = runif(n)
    nTrial = 500
    ht_est = numeric(nTrial)
    for (i in 1:nTrial) {
      included = as.logical(rbinom(n, 1, incl_prob))
      ht_est[i] = 1/n * sum(pop[included] / incl_prob[included])
    }
    print(paste0('population mean: ', round(mean(pop), 2)))
    print(paste0('average Horvitz-Thompson estimate: ', round(mean(ht_est), 2)))
    print(paste0('standard error in Horvitz-Thompson estimate: ', round(sd(ht_est), 2)))

This code makes a single random population of 1000 subjects, subsamples from that population with a subject-dependent probability, then computes the H-T estimator. It does the subsampling & H-T estimation 500 times on the same population to help illustrate the estimator's accuracy. Here's a sample run:

    [1] "population mean: 65.94"
    [1] "average Horvitz-Thompson estimate: 65.9"
    [1] "standard error in Horvitz-Thompson estimate: 5.09"

The first number is the population mean. The population is random, but is generated once at the beginning of the code and is fixed thereafter. Each of the 500 estimation trials takes a different random subsample pop[included] of this single fixed population pop.

The second number is the average of the 500 Horvitz-Thompson estimates of the population mean. Notice how close it is to the population mean, illustrating the unbiasedness of the H-T estimator.

The third number is the standard deviation of those 500 estimates. It is an estimate of the standard error for any given H-T estimate of the population mean.

You might wonder why the average H-T estimate is so much closer to the population mean than the standard error would suggest. This is because we have averaged 500 H-T estimates together, and the error in this average is roughly $\sigma / \sqrt{T}$, where $\sigma$ is the standard deviation (in this case 5.09) and $T$ is the number of trials. In our code $T = 500$, so $\sigma / \sqrt{T} = 0.22$, which is on the order of the actual deviation, $0.04$, between the population mean and the 500 averaged H-T estimates.
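For reference, the same simulation can be sketched outside R; here is a numpy translation of the idea (different random draws, and with inclusion probabilities bounded away from zero to keep the variance tame):

```python
import numpy as np

rng = np.random.default_rng(123)
n = 1000
pop = 66 + 2 * rng.normal(size=n)            # one fixed population
incl_prob = rng.uniform(0.1, 1.0, size=n)    # known inclusion probabilities p_i

estimates = []
for _ in range(500):
    included = rng.random(n) < incl_prob     # Bernoulli(p_i) subsampling
    # Horvitz-Thompson: weight each included value by 1/p_i.
    estimates.append(np.sum(pop[included] / incl_prob[included]) / n)

# Averaged over many trials, the H-T estimate tracks the population mean.
err = abs(np.mean(estimates) - pop.mean())
```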
46,163
R mixed models: lme, lmer or both? which one is relevant for my data and why?
It doesn't look like these are "totally different results"; the fixed-effects coefficients are the same between the 2 outputs, with slightly different estimates for standard errors and random effects. That's not surprising given that the lmer fit used restricted maximum likelihood (REML) while the lme fit used maximum likelihood (ML). See this Cross Validated page for an introduction to the differences between these approaches. The lack of p-values in the output from lmer is a conscious choice by the authors of the package, as discussed in the documentation of the package and on this Cross Validated page. The argument is that p-values as commonly calculated for coefficients in such analyses are misleading. When the lme4 package is loaded, the command help("pvalues") provides a guide for ways to proceed, as noted on page 35 of the current vignette.

Follow-up query: Thanks for the answers. I can see that the fixed-effects coefficients are the same in both outputs, and the t-values are almost the same. Although I am not interested in the plot (random) effects, the results are not the same (residuals). My questions: (1) In the lme4 package (lmer) the author intentionally omitted p-values with some warnings; what about the nlme package? (2) Are the p-values resulting from lme reliable? These p-values are in line with the trends of my data. (3) Is there any reason that, for my data, lmer is more relevant to use than lme?

Response: There are 2 issues to think about here. First is the choice between the REML and ML approaches to the model fit. (The 'method' argument you gave to lmer is not recognized; to get a fit with ML in lmer you have to specify "REML=FALSE" as @f_coppens noted. You can get a fit with REML in lme with the argument 'method="REML"', or just omit the argument, as REML is the default for lme.) The REML lmer fit versus the ML lme fit almost certainly accounts for the differences in estimated random effects, differences in estimated errors of coefficients, and the resulting differences in t-values. For making the choice between REML and ML, see this Cross Validated page. For a relatively small study such as this (at least relative to the number of parameters you are trying to estimate), you probably want REML.

The second issue is how to calculate p-values. A major underlying issue is how to estimate the degrees of freedom remaining in the model after you account for the number of estimated parameters; you need the degrees of freedom to translate t-values to p-values. I suppose there is nothing wrong in reporting the p-values provided by lme so long as you specify that you used lme to get them. Those p-values are probably too low, but a reader will at least know how to think about them and their limitations. You might be better off, however, using the approaches recommended by help("pvalues") under the lme4 package, or provided by accessory packages like lmerTest. Either way, it's important to think about the underlying issues rather than simply accepting the standard output from statistical software.

Follow-up query: Thank you so much. All your suggestions worked. I ran the models in both lme and lmer in the default form without specifying a method (REML being the default) and found the results almost identical, except for slight differences in p-values associated with differences in DFs. If you are interested, I will post the results?

model1=lme(Abundance~Year+Topography+land_use, random=~1|Plot, data=BIData)
model2=lmer(Abundance~Year+Topography+land_use+(1|Plot), data=BIData)
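A minimal numeric sketch (pure Python, not the original R models; the data points are made up) of why REML and ML fits agree on the fixed-effect coefficients but differ in the estimated variance components: in the simplest case of ordinary regression, the ML estimate of the residual variance divides the residual sum of squares by n, while the REML estimate divides by n - p, so REML gives a larger (unbiased) variance estimate while the slope is identical.

```python
# Simple linear regression y = b0 + b1*x + e, fitted by least squares.
# Hypothetical data, n = 5 points, p = 2 estimated mean parameters.
x = [1.0, 2.0, 3.0, 4.0, 5.0]
y = [2.1, 3.9, 6.2, 7.8, 10.1]
n, p = len(x), 2

xbar = sum(x) / n
ybar = sum(y) / n
sxx = sum((xi - xbar) ** 2 for xi in x)
sxy = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y))
b1 = sxy / sxx                 # slope: the same under ML and REML
b0 = ybar - b1 * xbar

rss = sum((yi - (b0 + b1 * xi)) ** 2 for xi, yi in zip(x, y))
sigma2_ml = rss / n            # ML: biased downward
sigma2_reml = rss / (n - p)    # REML: accounts for estimated mean parameters

print(b1, sigma2_ml, sigma2_reml)
```

The same slope, two different variance estimates: this is the one-fixed-effect analogue of the pattern in the lme/lmer outputs above.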
46,164
Is E(Y|X) a function of Y?
For jointly continuous random variables $X$ and $Y$ with joint density $f_{X,Y}$, the conditional density of $Y$ given $X = x$ is $$g_{Y\mid X = x}(y\mid x) =\frac {f_{X,Y}(x,y)}{\int_{-\infty}^{\infty}f_{X,Y}(x,y)dy} = \frac {f_{X,Y}(x,y)}{f_{X}(x)}$$ and so $$E[Y\mid X = x] = \int_{-\infty}^{\infty} y g_{Y\mid X = x}(y\mid x) \ dy = \frac{1}{f_{X}(x)} \int_{-\infty}^{\infty} y f_{X,Y}(x,y) \ dy = h(x)\tag{1}$$ Thus, the number $E[Y\mid X = x]$ depends on your choice of $x$. It does not depend on $y$ at all; $y$ is just a variable of integration and disappears when we plug in the limits on the integral in $(1)$. We can think of this number as the realization or sample of a random variable $Z$, where $Z$ has value $E[Y\mid X = x]$ whenever it so happens that $X$ has value $x$. So, we can think of the random variable $Z = E[Y \mid X]$ as a function $h(X)$ of $X$, since if we sample $X$ we can figure out what the sample of $Z$ must be. $Z$ is not a function of $Y$ at all! The law of iterated expectation says that the expected value of $Z = h(X)$, which is a function of $X$ and not at all of $Y$, quite by magic happens to equal $E[Y]$, the expected value of $Y$; that is, $$E[Z] = E[h(X)] = E\left[\, E[Y\mid X]\,\right] = E[Y].$$
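A small discrete check of the same point (Python, with a made-up joint pmf standing in for the density): $h(x) = E[Y \mid X = x]$ depends only on $x$ because $y$ is summed out, and averaging $h(X)$ over the distribution of $X$ recovers $E[Y]$ exactly.

```python
from fractions import Fraction as F

# Hypothetical joint pmf P(X = x, Y = y), keyed by (x, y).
pmf = {(0, 1): F(1, 8), (0, 3): F(3, 8),
       (1, 1): F(2, 8), (1, 3): F(2, 8)}

def h(x):
    """E[Y | X = x]: a function of x alone; y is summed out."""
    px = sum(p for (xx, _), p in pmf.items() if xx == x)
    return sum(y * p for (xx, y), p in pmf.items() if xx == x) / px

# Law of iterated expectation: E[h(X)] = E[Y].
p_x = {x: sum(p for (xx, _), p in pmf.items() if xx == x) for x in (0, 1)}
e_hx = sum(h(x) * p_x[x] for x in (0, 1))
e_y = sum(y * p for (_, y), p in pmf.items())
print(h(0), h(1), e_hx, e_y)
```

Exact fractions are used so the identity $E[h(X)] = E[Y]$ holds with equality rather than up to rounding.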
46,165
Is E(Y|X) a function of Y?
@Neznajka pointed out the formal basis for why $E[Y|X]$ is a function of $X$, not $Y$. At a more abstract level, you can also see this by considering that $E[Y|X]$ takes as input a value of $X$ and maps it to a conditional expected value of $Y$. Therefore, you can write $E[Y|X]$ as a plain old univariate function $f: \mathbb{R} \to \mathbb{R}, x\mapsto E[Y|X=x]$. So, since the distribution of $Y$ remains fixed, it has to be a function of $X$.
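To make that "plain old univariate function" concrete, here is a hypothetical sketch: if $Y = 2X + \varepsilon$ with $E[\varepsilon] = 0$ and $\varepsilon$ independent of $X$, then $f(x) = E[Y \mid X = x] = 2x$, and a realization of $Z = f(X)$ is pinned down by the realization of $X$ alone, with no realization of $Y$ needed.

```python
def f(x):
    # E[Y | X = x] for the hypothetical model Y = 2X + eps, E[eps] = 0.
    return 2 * x

# Knowing the realization of X determines the realization of Z = E[Y | X].
x_samples = [0.5, 1.0, 3.0]
z_samples = [f(x) for x in x_samples]
print(z_samples)   # -> [1.0, 2.0, 6.0]
```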
46,166
Can gradient descent find a better solution than least squares regression?
No. These two methods both solve the same problem: minimizing the sum of squares error. One method is much faster than the other, but they are both arriving at the same answer. This would be akin to asking "which gives a better answer to 10/4: long division or a calculator?"
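A quick sketch of that claim (Python, toy data of my own): the closed-form least-squares slope and a plain gradient descent on the same squared-error objective land on the same answer, up to numerical precision.

```python
# Toy data, roughly y = 3x.
xs = [1.0, 2.0, 3.0, 4.0]
ys = [3.1, 5.9, 9.2, 11.8]

# Closed-form OLS slope for a no-intercept model: sum(x*y) / sum(x^2).
b_closed = sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)

# Gradient descent on L(b) = sum (y - b*x)^2; gradient is -2*sum x*(y - b*x).
b = 0.0
lr = 0.01
for _ in range(2000):
    grad = -2.0 * sum(x * (y - b * x) for x, y in zip(xs, ys))
    b -= lr * grad

print(b_closed, b)   # the two estimates agree
```

Both are minimizing the same convex objective, so there is a single minimum for both to find; only the route differs.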
46,167
Can gradient descent find a better solution than least squares regression?
OLS solves for the BLUE (best linear unbiased estimator) only when all Gauss-Markov assumptions are met. You need a linear model, independence, identical distribution, exogeneity, and homoscedasticity. In scenarios without linearity, we can still solve for a local minimum using gradient descent (preferably stochastic gradient descent with momentum). In terms of finding better solutions than OLS, the answer depends on whether you actually want the OLS solution. If the OLS assumptions are not met, then you might need to perform weighted OLS, GLS, lasso regression, or ridge regression. The model you choose depends on which assumptions you violate and how.
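As a hedged illustration of one of those alternatives, here is ridge regression in its simplest one-predictor, no-intercept form (toy numbers, not any particular data set): the penalty λ shrinks the OLS coefficient toward zero, trading a little bias for lower variance.

```python
xs = [1.0, 2.0, 3.0]
ys = [2.0, 4.1, 5.9]

sxy = sum(x * y for x, y in zip(xs, ys))
sxx = sum(x * x for x in xs)

b_ols = sxy / sxx             # unbiased under Gauss-Markov
b_ridge = sxy / (sxx + 5.0)   # lambda = 5 shrinks the estimate toward zero

print(b_ols, b_ridge)
```

As λ goes to 0 the ridge estimate returns to OLS; as λ grows it shrinks further, which is exactly the bias-variance dial the answer alludes to.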
46,168
Calculate mean and variance of a distribution of a distribution
The expectation and variance computations in your example can be handled with the law of total expectation and law of total variance. The law of total expectation in your case reads: $$ E(\theta) = E_{\mu} ( E_{\theta} (\theta \mid \mu ) ) $$ where the subscripts indicate which variable is being averaged over in the expectation. The inside expectation is $$ E_{\theta} ( \theta \mid \mu ) = \mu $$ and so the law gives the total expectation as $$ E(\theta) = E_{\mu} ( \mu ) = a $$ The law of total variance in your case reads $$ var (\theta) = E_{\mu}( var_{\theta} (\theta \mid \mu ) ) + var_{\mu} ( E_{\theta} ( \theta \mid \mu ) ) $$ where, again, the subscripts are telling us what is being averaged over. We can calculate the first term as $$ E_{\mu}( var_{\theta} (\theta \mid \mu ) ) = E_{\mu}( \sigma^2 ) = \sigma^2 $$ and the second as $$ var_{\mu} ( E_{\theta} ( \theta \mid \mu ) ) = var_{\mu} ( \mu ) = b^2 $$ which recovers your result from Mathematica.
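The two laws can be checked numerically with a discrete stand-in (my own toy construction, not the original model): let $\mu$ take the values $a - b$ and $a + b$ with probability 1/2 each (so $E[\mu] = a$ and $var(\mu) = b^2$), and let $\theta \mid \mu$ have mean $\mu$ and variance $\sigma^2$. The mixture then has mean $a$ and variance $\sigma^2 + b^2$, exactly as derived above.

```python
a, b, sigma2 = 2.0, 0.5, 1.5

# Discrete prior for mu: two equally likely values with mean a, variance b^2.
mus = [(a - b, 0.5), (a + b, 0.5)]

e_theta = sum(mu * p for mu, p in mus)                   # E[E[theta|mu]] = a

# var(theta) = E[var(theta|mu)] + var(E[theta|mu])
within = sum(sigma2 * p for mu, p in mus)                # = sigma^2
between = sum((mu - e_theta) ** 2 * p for mu, p in mus)  # = b^2
var_theta = within + between

print(e_theta, var_theta)
```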
46,169
Estimate lag for granger causality test
Introduction

This test in your question seems rather heavy handed. It is conducting pairwise bivariate Granger causality testing over all pairs in the data set. I'll choose two to examine.

require(lmtest)
ts(datSel$cpi)->cpi
ts(datSel$lohn)->wages  #i presume

Note that in your test at lag order 4 we get that both wages Granger-cause cpi and cpi Granger-causes wages. Is this lag order appropriate?

The bigger problem

We have a much bigger problem first, though. Neither wages nor cpi are stationary. We'll take the first difference of the logs of these indices to achieve plausible stationarity.

d.cpi<-diff(log(cpi))
d.wages<-diff(log(wages))

With the differenced series and lag order 4, we now have that wage increases Granger-cause cpi increases (inflation) but not vice versa:

lmtest::grangertest(d.wages,d.cpi,4)

Granger causality test

Model 1: d.cpi ~ Lags(d.cpi, 1:4) + Lags(d.wages, 1:4)
Model 2: d.cpi ~ Lags(d.cpi, 1:4)
  Res.Df Df      F  Pr(>F)
1     46
2     50 -4 3.4826 0.01446 *
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

lmtest::grangertest(d.cpi,d.wages,4)

Granger causality test

Model 1: d.wages ~ Lags(d.wages, 1:4) + Lags(d.cpi, 1:4)
Model 2: d.wages ~ Lags(d.wages, 1:4)
  Res.Df Df      F Pr(>F)
1     46
2     50 -4 2.0651 0.1008

Note that these are simply F-tests of two candidate models, one with lags of the dependent variable only, the other also with lags of the independent variable.

The Problem Asked About

The other problem is how to choose an appropriate lag. If this data is quarterly, then a lag order of 4 is a great place to start. You might simply report the p-values for the F-tests from lag orders starting at the frequency of the data to twice the frequency, 4 to 8. In this case, the same conclusion would be reached at any of those lag orders. One could also use AIC/BIC to select the lag length, or use a series of F-tests on either increasing or decreasing lags. I'll use both AIC/BIC and F-tests of decreasing lags in this example.

select.lags<-function(x,y,max.lag=8) {
  y<-as.numeric(y)
  y.lag<-embed(y,max.lag+1)[,-1,drop=FALSE]
  x.lag<-embed(x,max.lag+1)[,-1,drop=FALSE]
  t<-tail(seq_along(y),nrow(y.lag))
  ms=lapply(1:max.lag,function(i) lm(y[t]~y.lag[,1:i]+x.lag[,1:i]))
  pvals<-mapply(function(i) anova(ms[[i]],ms[[i-1]])[2,"Pr(>F)"],max.lag:2)
  ind<-which(pvals<0.05)[1]
  ftest<-ifelse(is.na(ind),1,max.lag-ind+1)
  aic<-as.numeric(lapply(ms,AIC))
  bic<-as.numeric(lapply(ms,BIC))
  structure(list(ic=cbind(aic=aic,bic=bic),pvals=pvals,
    selection=list(aic=which.min(aic),bic=which.min(bic),ftest=ftest)))
}

Let's try this on d.cpi~d.wages.

s<-select.lags(d.wages,d.cpi,8)
t(s$selection)
     aic bic ftest
[1,] 5   5   5

In this case, the AIC, BIC, and series of F-tests all suggest a lag order of 5 out of 8. Looking at a plot of the ICs, we can see a local minimum at 5, which suggests that we might be satisfied here.

plot.ts(s$ic)

Note that in the opposite direction we also select 5 but do not have statistically significant Granger causality.

Conclusion

In this data set we have some evidence that wage increases temporally precede inflation increases and are useful in forecasting them, but not vice versa. Bivariate Granger causality testing must be performed on stationary data or conclusions may be spurious. Additional testing should be performed on the underlying regressions to check other assumptions of OLS, but has not been performed here. Lag order equal to the frequency of the data is often a good choice. Information criteria and F-tests may also be used.
46,170
Why do we need to log transform independent variable in logistic regression
The reason for such transformations has nothing to do with the variable's distribution. Instead, it has to do with the functional form of the effect. Say we want to know the effect of the number of publications on the probability of getting tenure. It is reasonable to believe that getting an extra publication when one has only 1 publication has more impact than getting an extra publication when one has already published 50 articles. The log transformation is one way to capture such a (testable) assumption of diminishing returns.
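The diminishing-returns point can be seen with plain arithmetic (the coefficient and cutoffs here are invented): on the log scale, going from 1 to 2 publications moves the predictor far more than going from 50 to 51, so the same logistic coefficient implies a much larger change in the log-odds for the first extra paper.

```python
import math

beta = 1.0  # hypothetical coefficient on log(publications)

def delta_logodds(n):
    """Change in log-odds from the (n+1)-th publication, predictor = log(n)."""
    return beta * (math.log(n + 1) - math.log(n))

early = delta_logodds(1)    # going from 1 to 2 publications
late = delta_logodds(50)    # going from 50 to 51 publications
print(early, late)
```

With the raw (untransformed) count as the predictor, both increments would move the log-odds by the same amount; the log makes the marginal effect shrink with n.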
46,171
What is the median of the school grades A A B B ?
Both $A$ and $B$ would be valid medians, since at least half the data are $\leq A$ and $\geq B$; you could call one the "lower" median and one the "upper" median, or if uniquely defined medians are important you could try make the set that includes both "the median".
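Python's standard library exposes both conventions directly. Mapping the grades to ordered codes (A = 1, B = 2, an encoding chosen here just for illustration):

```python
from statistics import median_low, median_high

# Ordinal coding: A = 1, B = 2 (better grade = smaller code).
grades = [1, 1, 2, 2]   # A, A, B, B

lower = median_low(grades)    # -> 1, i.e. A
upper = median_high(grades)   # -> 2, i.e. B
print(lower, upper)
```

`median_low` and `median_high` return the lower and upper of the two middle values for an even-length sample, which matches the "lower median"/"upper median" terminology above without ever averaging grades.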
46,172
Avoiding a spline dip
There are a number of ways to avoid such effects (e.g. smoothing splines can often be tweaked so as to avoid a dip, or maybe some form of monotonic spline to the left of the peak will be needed), but I think in this particular case a simple approach might be to transform (perhaps take logs or square roots), fit a spline on that scale and transform back. Unimodal splines exist and may suit you better. I haven't used it (edit: well, I have now! see below), but I believe the package uniReg (on CRAN) will do unimodal splines. ... Some code. Here I had previously but unimaginatively read your data into a data frame called a:

library(uniReg)
z=seq(min(a$Time),max(a$Time),length=201)
uf=with(a,unireg(Time,Fe2.,g=5,sigma=1))
plot(Fe2.~Time,a,ylim=c(0,14500))
lines(z,uf$unimod.func(z))

The author also has a paper on unimodal splines - since published, by the look of it, but I'll let you chase the paper up if you want it - but doesn't seem to mention it in the package documentation.
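A bare-bones illustration of the transform-and-back idea (pure Python, made-up data with a sharp feature; an exact quadratic through three points stands in for a spline): a curve fitted to log(y) back-transforms through exp(), so it can approach zero but never dip below it, whereas the same quadratic fitted on the raw scale overshoots below zero between the points.

```python
import math

# Hypothetical data with a sharp low point between two large values.
pts = [(0.0, 10000.0), (4.0, 10.0), (10.0, 9000.0)]

def quad_through(points):
    """Exact quadratic (Lagrange form) through three points."""
    (x0, y0), (x1, y1), (x2, y2) = points
    def q(x):
        return (y0 * (x - x1) * (x - x2) / ((x0 - x1) * (x0 - x2))
              + y1 * (x - x0) * (x - x2) / ((x1 - x0) * (x1 - x2))
              + y2 * (x - x0) * (x - x1) / ((x2 - x0) * (x2 - x1)))
    return q

raw = quad_through(pts)
logged = quad_through([(x, math.log(y)) for x, y in pts])

xs = [i * 0.5 for i in range(21)]            # grid over [0, 10]
raw_vals = [raw(x) for x in xs]
back_vals = [math.exp(logged(x)) for x in xs]

print(min(raw_vals), min(back_vals))
```

The raw-scale curve goes negative near the trough; the exp-of-log-fit curve is positive everywhere by construction, which is exactly the guarantee the log transform buys you for the spline fit.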
46,173
How to estimate variance of classifier on test set?
Is there a principled way to estimate the variance of the classifiers using the testing distribution?

Yes, and contrary to your intuition it is actually easy to do this by cross validation. The idea is that iterated/repeated cross validation (or out-of-bag if you prefer to resample with replacement) allows you to compare the performance of slightly different "surrogate" models for the same test case, thus separating variance due to model instability (training) from variance due to the finite number of test cases (testing). See e.g. Beleites, C. & Salzer, R.: Assessing and improving the stability of chemometric models in small sample size situations, Anal Bioanal Chem, 390, 1261-1271 (2008). DOI: 10.1007/s00216-007-1818-6

As @RyanBressler points out, there's the Bengio paper about cross validation fundamentally underestimating the variance of the models. This underestimation occurs with respect to the assumption that resampling is a good approximation to a new independent sample (which it obviously isn't). This is important if you want to compare the general performance of some type of classifier for some type of data, but not in applied scenarios where we talk about the performance of a classifier trained from the given data. Note also that the separation of this "applied" test variance into instability and testing variance uses a very different view of the resampling: here the surrogate models are treated as approximations or slightly perturbed versions of a model trained on the whole given training data - which should be a much better approximation.

the performance that method A and method B each achieve on the test set are virtually identical. However, I fear this is simply due to the variance of the two methods, and perhaps method A would be better on average if I could sample more training and testing sets.

This is quite possible. I'd suggest that you check which of the 2 sources of variance (instability, i.e. training, vs. testing uncertainty) is the larger and focus on reducing that. I think Sample size calculation for ROC/AUC analysis discusses the effects of finite test sample size on your AUC estimate. However, for performance comparison of two classifiers on the same data I'd suggest using a paired test like McNemar's: for finding out whether (or which) classifier is better, you can concentrate on correct classifications by one classifier that are wrongly predicted by the other. These numbers are fractions of test cases, for which the binomial distribution lets you calculate the variance.
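A sketch of the paired McNemar comparison (the counts are invented for illustration): only the discordant test cases, i.e. those classified correctly by one classifier and wrongly by the other, carry information about which classifier is better.

```python
# Discordant counts on a shared test set (hypothetical numbers):
b = 30   # cases classifier A got right and classifier B got wrong
c = 15   # cases classifier B got right and classifier A got wrong

# McNemar's chi-squared statistic with continuity correction, 1 df.
chi2 = (abs(b - c) - 1) ** 2 / (b + c)
print(chi2)   # compare with the 5% critical value 3.84 for 1 df
```

Here chi2 = 196/45, which is about 4.36 and exceeds 3.84, so with these made-up counts the paired test would favor classifier A at the 5% level even though both might show similar overall accuracy.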
How to estimate variance of classifier on test set?
Is there a principled way to estimate the variance of the classifiers using the testing distribution? Yes, and contrary to your intuition it is actually easy to do this by cross valiation. The idea i
How to estimate variance of classifier on test set?
Is there a principled way to estimate the variance of the classifiers using the testing distribution?

Yes, and contrary to your intuition it is actually easy to do this by cross validation. The idea is that iterated/repeated cross validation (or out-of-bag estimation, if you prefer to resample with replacement) allows you to compare the performance of slightly different "surrogate" models for the same test case, thus separating variance due to model instability (training) from variance due to the finite number of test cases (testing). See e.g. Beleites, C. & Salzer, R.: Assessing and improving the stability of chemometric models in small sample size situations, Anal Bioanal Chem, 390, 1261-1271 (2008). DOI: 10.1007/s00216-007-1818-6

As @RyanBressler points out, there's the Bengio paper about cross validation fundamentally underestimating the variance of the models. This underestimation occurs with respect to the assumption that resampling is a good approximation to a new independent sample (which it obviously isn't). That is important if you want to compare the general performance of some type of classifier for some type of data, but not in applied scenarios where we talk about the performance of a classifier trained from the given data. Note also that the separation of this "applied" test variance into instability and testing variance takes a very different view of the resampling: here the surrogate models are treated as approximations or slightly perturbed versions of a model trained on the whole given training data, which should be a much better approximation.

As for the worry that "the performance that method A and method B each achieve on the test set are virtually identical. However, I fear this is simply due to the variance of the two methods, and perhaps method A would be better on average if I could sample more training and testing sets": this is quite possible. I'd suggest checking which of the two sources of variance (instability, i.e. training uncertainty, and testing uncertainty) is the larger one and focusing on reducing that.

I think "Sample size calculation for ROC/AUC analysis" discusses the effects of finite test sample size on your AUC estimate. However, for a performance comparison of two classifiers on the same data I'd suggest using a paired test like McNemar's: to find out whether (or which) classifier is better, you can concentrate on correct classifications by one classifier that are wrongly predicted by the other. These counts are fractions of test cases for which the binomial distribution lets you calculate a variance.
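The McNemar comparison suggested at the end can be sketched in a few lines (a Python illustration, not part of the original answer; the discordant counts below are hypothetical). It looks only at the test cases where the two classifiers disagree and asks whether the split is compatible with a fair coin:

```python
import math

def mcnemar_exact(n01, n10):
    """Exact (binomial) McNemar test on the discordant pairs.
    n01: cases classifier A got right and B got wrong;
    n10: cases B got right and A got wrong.
    Under H0 (equal performance) the discordant cases split 50/50."""
    n = n01 + n10
    k = min(n01, n10)
    # two-sided exact binomial p-value under p = 0.5
    tail = sum(math.comb(n, i) for i in range(k + 1)) * 0.5 ** n
    return min(1.0, 2.0 * tail)

p_value = mcnemar_exact(8, 3)   # hypothetical discordant counts
```

With 8 vs. 3 discordant cases the exact two-sided p-value is about 0.23, i.e. far too few discordant cases to call a winner.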
46,174
How to estimate variance of classifier on test set?
You want some sort of bootstrap or other method for generating independent measurements of performance. You can't look at the k cross-validation folds or divide the test set into k partitions, as the observations won't be independent. This can and will introduce significant bias into the estimate of variance. See for example Yoshua Bengio's "No Unbiased Estimator of the Variance of K-Fold Cross-Validation".

It isn't even really valid to look at best- and worst-case performance on the CV folds, since they aren't really independent draws: some folds will just have much worse or much better performance.

You could do out-of-bag estimates of performance, where you essentially repeatedly bootstrap training data sets and get performance on the rest of the data. See this write-up by Breiman and the referenced earlier work by Tibshirani on estimating performance variance this way.

If that is computationally prohibitive because you have a ton of data, I'd wonder about bootstrapping or otherwise resampling just the holdout set, but I can't think of or find a reference for that offhand.
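The last idea, resampling just the holdout set, is straightforward to sketch (a Python illustration using only the standard library, not part of the original answer; the 80/20 correctness vector is made up). Bootstrap the per-case 0/1 correctness indicators and look at the spread of the resampled accuracy:

```python
import random

def bootstrap_test_variance(correct, n_boot=2000, seed=0):
    """Resample the held-out test set: bootstrap the per-case 0/1
    correctness indicators and report mean and variance of accuracy."""
    rng = random.Random(seed)
    n = len(correct)
    accs = []
    for _ in range(n_boot):
        # accuracy on one bootstrap resample of the test cases
        acc = sum(correct[rng.randrange(n)] for _ in range(n)) / n
        accs.append(acc)
    mean = sum(accs) / n_boot
    var = sum((a - mean) ** 2 for a in accs) / (n_boot - 1)
    return mean, var

correct = [1] * 80 + [0] * 20   # hypothetical holdout: 80% accuracy on 100 cases
mean_acc, var_acc = bootstrap_test_variance(correct)
```

For 0/1 outcomes this should land close to the binomial variance p(1-p)/n = 0.8*0.2/100 = 0.0016, which is the same quantity the binomial reasoning behind paired tests exploits.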
46,175
Piecewise regression with constraints
If the goal is simply to fit a function, you could treat this as an optimization problem:

y <- c(4.5, 4.3, 2.57, 4.40, 4.52, 1.39, 4.15, 3.55, 2.49, 4.27, 4.42,
       4.10, 2.21, 2.90, 1.42, 1.50, 1.45, 1.7, 4.6, 3.8, 1.9)
x <- c(320, 419, 650, 340, 400, 800, 300, 570, 720, 480, 425, 460, 675,
       600, 850, 920, 975, 1022, 450, 520, 780)
plot(x, y, col = "black", pch = 16)

# we need four parameters: the two breakpoints and the starting and ending intercepts
fun <- function(par, x) {
  # set all y values to starting intercept
  y1 <- x^0 * par["i1"]
  # set values after second breakpoint to ending intercept
  y1[x >= par["x2"]] <- par["i2"]
  # which values are between breakpoints?
  r <- x > par["x1"] & x < par["x2"]
  # interpolate between breakpoints
  y1[r] <- par["i1"] + (par["i2"] - par["i1"]) / (par["x2"] - par["x1"]) * (x[r] - par["x1"])
  y1
}

# sum of squared residuals
SSR <- function(par) {
  sum((y - fun(par, x))^2)
}

library(optimx)
optimx(par = c(x1 = 500, x2 = 820, i1 = 5, i2 = 1), fn = SSR, method = "Nelder-Mead")
#                   x1       x2       i1       i2     value fevals gevals niter convcode kkt1 kkt2 xtimes
# Nelder-Mead 449.8546 800.0002 4.381454 1.512305 0.6404728    373     NA    NA        0 TRUE TRUE   0.06

lines(300:1100, fun(c(x1 = 449.8546, x2 = 800.0002, i1 = 4.381454, i2 = 1.512305), 300:1100))
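The same flat/linear/flat fit can be sketched outside R as well, e.g. with scipy's Nelder-Mead (a Python translation of the answer's approach, not part of the original):

```python
import numpy as np
from scipy.optimize import minimize

# data from the answer above
y = np.array([4.5, 4.3, 2.57, 4.40, 4.52, 1.39, 4.15, 3.55, 2.49, 4.27, 4.42,
              4.10, 2.21, 2.90, 1.42, 1.50, 1.45, 1.7, 4.6, 3.8, 1.9])
x = np.array([320, 419, 650, 340, 400, 800, 300, 570, 720, 480, 425, 460, 675,
              600, 850, 920, 975, 1022, 450, 520, 780])

def fun(par, x):
    """Piecewise function: intercept i1 before x1, linear interpolation
    between x1 and x2, intercept i2 after x2."""
    x1, x2, i1, i2 = par
    yhat = np.full(len(x), i1, dtype=float)
    yhat[x >= x2] = i2
    r = (x > x1) & (x < x2)
    yhat[r] = i1 + (i2 - i1) / (x2 - x1) * (x[r] - x1)
    return yhat

def ssr(par):
    # sum of squared residuals, as in the R version
    return np.sum((y - fun(par, x)) ** 2)

res = minimize(ssr, x0=[500, 820, 5, 1], method="Nelder-Mead")
```

With the same starting values the optimizer should land near the R solution (breakpoints around 450 and 800, SSR about 0.64), though Nelder-Mead on a non-smooth objective can stop at slightly different points.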
46,176
Piecewise regression with constraints
If you also want confidence and prediction intervals, you can first approximate your three-phase piecewise linear function by a smooth function, do an nls fit and then use the investr package (this helps the fitting, as the function is then continuously differentiable). In your case:

x <- c(478, 525, 580, 650, 700, 720, 780, 825, 850, 900, 930, 980, 1020,
       1040, 1050, 1075, 1081, 1100, 1160, 1180, 1200)
y <- c(1.70, 1.45, 1.50, 1.42, 1.39, 1.90, 2.49, 2.21, 2.57, 2.90, 3.55,
       3.80, 4.27, 4.10, 4.60, 4.42, 4.30, 4.52, 4.40, 4.50, 4.15)

# calculate rolling slopes at each point to provide good initial estimates
# for slope parameter b
f <- function (d) {
  m <- lm(y ~ x, as.data.frame(d))
  return(coef(m)[2])
}
require(zoo)
slopes <- rollapply(data.frame(x = x, y = y), 3, f, by.column = F)

# smooth approximation is
# y ~ a + (1/2)*b*(B2-B1) +
#         (1/2)*sqrt(abs(b*(4*s+b*(B1-x)^2))) -
#         (1/2)*sqrt(abs(b*(4*s+b*(B2-x)^2)))
# this smooth approximation approaches the piecewise linear model more as s -> 0
require(minpack.lm)
nlslmfit = nlsLM(y ~ a + (1/2)*exp(logb)*(B2-B1) +  # we fit exp(logb) to force b > 0; if you don't want this, just fit b instead
                   (1/2)*sqrt(abs(exp(logb)*(4*1E-10+exp(logb)*(B1-x)^2))) -  # s is set to 1E-10; we could also fit exp(logs)
                   (1/2)*sqrt(abs(exp(logb)*(4*1E-10+exp(logb)*(B2-x)^2))),
                 data = data.frame(x = x, y = y),
                 start = c(B1 = min(x)+1E-10, B2 = max(x)-1E-10,
                           a = min(y)+1E-10, logb = log(max(slopes))),
                 # lower = c(B1 = min(x), B2 = mean(x), a = min(y), logb = log(min(slopes[slopes > 0]))),
                 # upper = c(B1 = mean(x), B2 = max(x), a = mean(y), logb = log(max(slopes))),
                 control = nls.control(maxiter = 1000, warnOnly = TRUE))
# as s -> 0 this smooth model approximates more closely the piecewise linear one
summary(nlslmfit)
# Parameters:
#        Estimate Std. Error t value Pr(>|t|)
# B1    699.99988   19.23569   36.39  < 2e-16 ***
# B2   1050.00069   15.49283   67.77  < 2e-16 ***
# a       1.50817    0.09636   15.65 1.57e-11 ***
# logb   -4.80172    0.06347  -75.65  < 2e-16 ***

require(investr)
xvals = seq(min(x), max(x), length.out = 100)
predintervals = data.frame(x = xvals,
  predFit(nlslmfit, newdata = data.frame(x = xvals), interval = "prediction"))
confintervals = data.frame(x = xvals,
  predFit(nlslmfit, newdata = data.frame(x = xvals), interval = "confidence"))
require(ggplot2)
qplot(data = predintervals, x = x, y = fit, ymin = lwr, ymax = upr,
      geom = "ribbon", fill = I("red"), alpha = I(0.2)) +
  geom_ribbon(data = confintervals, aes(x = x, ymin = lwr, ymax = upr),
              fill = I("blue"), alpha = I(0.2)) +
  geom_line(data = confintervals, aes(x = x, y = fit), colour = I("blue"), lwd = 2) +
  geom_point(data = data.frame(x = x, y = y),
             aes(x = x, y = y, ymin = NULL, ymax = NULL), size = 5, col = "blue") +
  ylab("y")

You can also do a robust nls fit (slightly more robust to outliers) using the nlrob function in the robustbase package; the rest is the same as above:

require(robustbase)
nlsrobfit <- nlrob(y ~ a + (1/2)*exp(logb)*(B2-B1) +  # we fit exp(logb) to force b > 0
                     (1/2)*sqrt(abs(exp(logb)*(4*1E-10+exp(logb)*(B1-x)^2))) -  # s is set to 1E-10; we could also fit exp(logs)
                     (1/2)*sqrt(abs(exp(logb)*(4*1E-10+exp(logb)*(B2-x)^2))),
                   data = data.frame(x = x, y = y),
                   maxit = 1000, method = "M", algorithm = "port", doCov = TRUE,
                   start = c(B1 = min(x)+1E-10, B2 = max(x)-1E-10,
                             a = min(y)+1E-10, logb = log(mean(slopes))),
                   # lower = c(B1 = min(x), B2 = mean(x), a = min(y), logb = log(min(slopes[slopes > 0]))),
                   # upper = c(B1 = mean(x), B2 = max(x), a = mean(y), logb = log(max(slopes))),
                   control = nls.control(maxiter = 1000, warnOnly = TRUE))
summary(nlsrobfit)
class(nlsrobfit) = "nls"  # for compatibility with investr

Comparison with the model where the s parameter is also fitted:

require(minpack.lm)
nlslmfit = nlsLM(y ~ a + (1/2)*exp(logb)*(B2-B1) +  # we fit exp(logb) to force b > 0
                   (1/2)*sqrt(abs(exp(logb)*(4*exp(logs)+exp(logb)*(B1-x)^2))) -  # we now fit exp(logs)
                   (1/2)*sqrt(abs(exp(logb)*(4*exp(logs)+exp(logb)*(B2-x)^2))),
                 data = data.frame(x = x, y = y),
                 start = c(B1 = min(x)+1E-10, B2 = max(x)-1E-10,
                           a = min(y)+1E-10, logb = log(mean(slopes)), logs = -10),
                 control = nls.control(maxiter = 1000, warnOnly = TRUE))
summary(nlslmfit)
# Parameters:
#         Estimate Std. Error t value Pr(>|t|)
# B1     7.000e+02  2.079e+01   33.67 2.78e-16 ***
# B2     1.051e+03  1.614e+01   65.08  < 2e-16 ***
# a      1.514e+00  1.000e-01   15.13 6.70e-11 ***
# logb  -4.806e+00  7.131e-02  -67.39  < 2e-16 ***
# logs  -1.805e+01  4.561e+04    0.00        1

require(investr)
xvals = seq(min(x), max(x), length.out = 100)
predintervals = data.frame(x = xvals,
  predFit(nlslmfit, newdata = data.frame(x = xvals), interval = "prediction"))
confintervals = data.frame(x = xvals,
  predFit(nlslmfit, newdata = data.frame(x = xvals), interval = "confidence"))
require(ggplot2)
qplot(data = predintervals, x = x, y = fit, ymin = lwr, ymax = upr,
      geom = "ribbon", fill = I("red"), alpha = I(0.2)) +
  geom_ribbon(data = confintervals, aes(x = x, ymin = lwr, ymax = upr),
              fill = I("blue"), alpha = I(0.2)) +
  geom_line(data = confintervals, aes(x = x, y = fit), colour = I("blue"), lwd = 2) +
  geom_point(data = data.frame(x = x, y = y),
             aes(x = x, y = y, ymin = NULL, ymax = NULL), size = 5, col = "blue") +
  ylab("y")

Comparison with a smooth 4-parameter logistic model:

M.4pl <- function(x, lower.asymp, upper.asymp, inflec, hill) {
  f <- lower.asymp + ((upper.asymp - lower.asymp) /
                        (1 + (x / inflec)^-hill))
  return(f)
}
require(minpack.lm)
nlslmfit = nlsLM(y ~ M.4pl(x, lower.asymp, upper.asymp, inflec, hill),
                 data = data.frame(x = x, y = y),
                 start = c(lower.asymp = min(y)+1E-10, upper.asymp = max(y)-1E-10,
                           inflec = mean(x), hill = 1),
                 control = nls.control(maxiter = 1000, warnOnly = TRUE))
summary(nlslmfit)
# Parameters:
#              Estimate Std. Error t value Pr(>|t|)
# lower.asymp    1.5371     0.1080   14.24 7.06e-11 ***
# upper.asymp    4.5508     0.1497   30.40 2.93e-16 ***
# inflec       889.1543    14.0924   63.09  < 2e-16 ***
# hill          13.1717     2.5475    5.17 7.68e-05 ***

require(investr)
xvals = seq(min(x), max(x), length.out = 100)
predintervals = data.frame(x = xvals,
  predFit(nlslmfit, newdata = data.frame(x = xvals), interval = "prediction"))
confintervals = data.frame(x = xvals,
  predFit(nlslmfit, newdata = data.frame(x = xvals), interval = "confidence"))
require(ggplot2)
qplot(data = predintervals, x = x, y = fit, ymin = lwr, ymax = upr,
      geom = "ribbon", fill = I("red"), alpha = I(0.2)) +
  geom_ribbon(data = confintervals, aes(x = x, ymin = lwr, ymax = upr),
              fill = I("blue"), alpha = I(0.2)) +
  geom_line(data = confintervals, aes(x = x, y = fit), colour = I("blue"), lwd = 2) +
  geom_point(data = data.frame(x = x, y = y),
             aes(x = x, y = y, ymin = NULL, ymax = NULL), size = 5, col = "blue") +
  ylab("y")
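As a quick sanity check on the smooth approximation used in the fits above (a Python sketch, not part of the original answer): as s -> 0 the formula collapses to the flat/linear/flat piecewise function, so for tiny s the two should agree almost everywhere. The parameter values below are rough stand-ins for the fitted ones:

```python
import numpy as np

def smooth(x, a, b, B1, B2, s):
    """The smooth approximation from the answer."""
    return (a + 0.5 * b * (B2 - B1)
            + 0.5 * np.sqrt(np.abs(b * (4 * s + b * (B1 - x) ** 2)))
            - 0.5 * np.sqrt(np.abs(b * (4 * s + b * (B2 - x) ** 2))))

def piecewise(x, a, b, B1, B2):
    """Flat below B1, slope b between B1 and B2, flat above B2."""
    return a + b * (np.clip(x, B1, B2) - B1)

x = np.linspace(400.0, 1300.0, 200)
a, b, B1, B2 = 1.5, np.exp(-4.8), 700.0, 1050.0   # roughly the fitted values
gap = np.max(np.abs(smooth(x, a, b, B1, B2, 1e-10)
                    - piecewise(x, a, b, B1, B2)))
```

The maximum gap is tiny (largest near the breakpoints, where the sqrt terms round off the corners), which is exactly why the smooth version is easier for nls to handle.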
46,177
Piecewise regression with constraints
In the general case, linear piecewise regression for three segments leads to this kind of function:

The parameters were computed according to the direct method (not iterative) given on page 30 of the paper https://fr.scribd.com/document/380941024/Regression-par-morceaux-Piecewise-Regression-pdf

But the first and third segments are not parallel to the x-axis, which probably is not what is expected. The particular case of piecewise regression for three segments with the first and third segments parallel to the x-axis is given on page 18 of the paper. The result is:
46,178
Interpretation of marginal effects in Logit Model with log$\times$independent variable
You know that in a logit: $$Pr[y = 1 \vert x,z] = p = \frac{\exp (\alpha + \beta \cdot \ln x + \gamma z)}{1+\exp (\alpha + \beta \cdot \ln x + \gamma z )}.$$ After some tedious calculus and simplification, the partial derivative of that with respect to $x$ becomes: $$ \frac{\partial Pr[y=1 \vert x,z]}{\partial x} = \frac{\beta}{x} \cdot p \cdot (1-p). $$ This is (sort of) equivalent to $$\frac{\Delta p}{\Delta x}=\frac{\beta}{x} \cdot p \cdot (1-p),$$ which can be re-written as $$\frac{\Delta p}{100 \cdot \frac{\Delta x}{x}}= \frac{\beta \cdot p \cdot (1-p)}{100}.$$ This is the definition of a semi-elasticity, and can be interpreted as the change in probability for a 1% change in $x$.

Here's an example in Stata.* Note that I am using margins instead of the out-of-date mfx to get the average marginal effect of $x$, $\frac{1}{N}\Sigma_{i=1}^N\frac{\beta \cdot p_i \cdot (1-p_i)}{100}$:

. sysuse auto, clear
(1978 Automobile Data)

. gen ln_price = ln(price)

. logit foreign ln_price mpg weight, nolog

Logistic regression                             Number of obs   =         74
                                                LR chi2(3)      =      57.69
                                                Prob > chi2     =     0.0000
Log likelihood = -16.185932                     Pseudo R2       =     0.6406

------------------------------------------------------------------------------
     foreign |      Coef.   Std. Err.      z    P>|z|     [95% Conf. Interval]
-------------+----------------------------------------------------------------
    ln_price |   6.851215    2.11763     3.24   0.001     2.700737    11.00169
         mpg |  -.0880842   .1031317    -0.85   0.393    -.2902186    .1140503
      weight |  -.0062268   .0017269    -3.61   0.000    -.0096115   -.0028422
       _cons |  -41.32383   16.24003    -2.54   0.011    -73.15371   -9.493947
------------------------------------------------------------------------------

. margins, expression(_b[ln_price]*predict()*(1-predict())/100)

Predictive margins                              Number of obs   =         74
Model VCE    : OIM
Expression   : _b[ln_price]*predict()*(1-predict())/100

------------------------------------------------------------------------------
             |            Delta-method
             |     Margin   Std. Err.      z    P>|z|     [95% Conf. Interval]
-------------+----------------------------------------------------------------
       _cons |   .0046371   .0007965     5.82   0.000      .003076    .0061982
------------------------------------------------------------------------------

This means that for a 1% increase in price, the probability that a car is foreign increases by 0.005 on a [0,1] scale. Or a 10% increase in price gives you a 0.05 increase. In these data, about 0.3 of the cars are foreign, so these effects are economically and statistically significant.

Edit: A good way to do this in Stata 10 is to install the user-written command margeff:

. margeff, dydx(ln_price) replace

Average partial effects after margeff
      y = Pr(foreign)

------------------------------------------------------------------------------
    variable |      Coef.   Std. Err.      z    P>|z|     [95% Conf. Interval]
-------------+----------------------------------------------------------------
    ln_price |   .4637103   .0796514     5.82   0.000     .3075964    .6198241
         mpg |  -.0059616    .006781    -0.88   0.379    -.0192522     .007329
      weight |  -.0004214   .0000417   -10.11   0.000    -.0005031   -.0003398
------------------------------------------------------------------------------

. lincom _b[ln_price]/100

 ( 1)  .01*ln_price = 0

------------------------------------------------------------------------------
    variable |      Coef.   Std. Err.      z    P>|z|     [95% Conf. Interval]
-------------+----------------------------------------------------------------
         (1) |   .0046371   .0007965     5.82   0.000      .003076    .0061982
------------------------------------------------------------------------------

*This is actually not a great empirical example since the relationship in the data has an inverted-U shape.
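The quantity margins computes, $\frac{1}{N}\sum_i \beta \cdot p_i (1-p_i)/100$, is easy to reproduce by hand (a numpy sketch, not part of the original answer; only the ln_price coefficient is taken from the output above, and the fitted probabilities below are made-up stand-ins for predict()):

```python
import numpy as np

def avg_semi_elasticity(beta_lnx, p):
    """Average marginal effect on Pr(y=1) of a 1% change in x
    when the model includes ln(x): mean of beta * p * (1-p) / 100."""
    p = np.asarray(p, dtype=float)
    return float(np.mean(beta_lnx * p * (1.0 - p)) / 100.0)

beta_ln_price = 6.851215                      # ln_price coefficient from the logit
p_hat = np.array([0.1, 0.25, 0.4, 0.6, 0.8])  # hypothetical fitted probabilities
ame = avg_semi_elasticity(beta_ln_price, p_hat)
```

Because $p(1-p)$ is largest at $p=0.5$, cases with fitted probabilities near one half contribute most to the average effect.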
46,179
Negative $R^2$ at random regression forest [duplicate]
Explained variance is here defined as R² = 1 - SSresidual / SStotal = 1 - sum((ŷ-y)²) / sum((mean(y)-y)²) = 1 - mse / var(y). It is correct that the squared Pearson product-moment correlation cannot be negative. In the documentation of the randomForest function, the Value section states: rsq (regression only) “pseudo R-squared”: 1 - mse / Var(y).

A simple interpretation of a negative R² is that you would have been better off simply predicting every sample as equal to the grand mean; the model does not do well. The predictions for the training set, RF$predicted, are out-of-bag cross-validated, and so should be any R² or other performance measure:

library(randomForest)
obs = 500
vars = 100
X = replicate(vars, factor(sample(1:5, obs, replace = T)))
y = rnorm(obs)
RF = randomForest(X, y)

# var explained printed
print(RF)
cat("% Var explained: \n",
    100 * (1 - sum((RF$y - RF$pred)^2) / sum((RF$y - mean(RF$y))^2)))

## pearson correlation R²(pearson)
cat("% Pearson cor: \n ", 100 * cor(RF$y, RF$predicted)^2)

## spearman correlation R²(spearman)
cat("% spearman cor: \n ", 100 * cor(RF$y, RF$predicted, method = "s")^2)
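The difference between the two definitions can be demonstrated without random forests at all (a numpy sketch, not part of the original answer): make predictions that are worse than the grand mean, and the pseudo-R² goes negative while the squared correlation cannot:

```python
import numpy as np

rng = np.random.default_rng(0)
y = rng.normal(size=500)                      # outcome with no usable signal
pred = -0.5 * y + rng.normal(size=500)        # predictions worse than mean(y)

mse = np.mean((y - pred) ** 2)
pseudo_r2 = 1.0 - mse / np.var(y)             # randomForest's rsq definition
pearson_r2 = np.corrcoef(y, pred)[0, 1] ** 2  # squared correlation, always >= 0
```

Here mse is far larger than var(y), so 1 - mse/var(y) is well below zero even though the squared correlation between y and the predictions is a perfectly ordinary value in [0, 1].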
46,180
Chi-squared-like test with absolute instead of squared differences for calculating test statistic
You don't very clearly explain your concern, but I suppose that you're probably worried about the relative weight the chi-square puts on the cases where $(O_i-E_i)^2$ is large relative to the $E_i$ on the denominator. A single such term can dominate the statistic. I also assume (at least to start with) that you're asking about the multinomial goodness of fit chi-square. Note that your statistic is $\sum_{i=1}^k|\frac{O_i-E_i}{E_i}|= \sum_{i=1}^k|\frac{O_i}{E_i}-1|$. If you want to reduce the effect of the larger differences between observed and expected values for multinomial goodness of fit tests, there's the power-divergence family[1]: $$2nI^\lambda=\frac{2}{\lambda(\lambda+1)}\sum_{i=1}^k O_i\left\{\left(\frac{O_i}{E_i}\right)^\lambda-1\right\}\,;\,\lambda\in\mathbb{R}$$ Some authors refer to $2nI^\lambda$ as $\text{CR}(\lambda)$. The choice $\lambda=1$ gives the ordinary chi-square, $\lambda=0$ gives the G-test, $\lambda=-\tfrac{1}{2}$ corresponds to the Freeman-Tukey statistic[2][3], and so on. These all have asymptotic chi-square distributions. Of those, two that would seem to come more or less near to what you were seeking in the statistic (at least in the sense of having a power of $O_i$ near 1) would be $\lambda=0$, the G-test (likelihood ratio test): $$G = 2\sum_{i=1}^k O_i\cdot\ln\left(\frac{O_i}{E_i}\right)$$ and the (usual form of the) Freeman-Tukey statistic: $$T^2 = 4\sum_{i=1}^k \left(\sqrt{O_i}-\sqrt{E_i}\right)^2$$ If you're looking for a test for a contingency table, the likelihood ratio test is widely accepted and has good properties; the distribution of the test statistic also tends to work a little better at small sample sizes. I've seen at least one paper where power-divergence statistics (aside from the usual chi-square and likelihood ratio test) were adapted to the contingency table case, but I haven't pursued them.
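As an aside (a sketch of mine, not part of the original answer, using made-up counts): the power-divergence family is available in SciPy as scipy.stats.power_divergence, so the special cases above can be computed and compared directly:

```python
import numpy as np
from scipy.stats import power_divergence

O = np.array([18, 55, 71, 56])        # observed counts (hypothetical data)
E = np.full(4, O.sum() / 4.0)         # expected counts under a uniform null

# lambda_=1 -> Pearson chi-square, lambda_=0 -> G-test, lambda_=-0.5 -> Freeman-Tukey
results = {name: power_divergence(O, E, lambda_=lam)
           for name, lam in [("Pearson", 1), ("G-test", 0), ("Freeman-Tukey", -0.5)]}
for name, (stat, p) in results.items():
    print(f"{name:14s} statistic = {stat:8.3f}   p = {p:.4f}")
```

The three statistics agree with the closed forms above (up to the equal-total-count constraint), while weighting large deviations differently.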
-- More generally, you can use pretty much whatever statistic you choose if you can sample from the null distribution of your test statistic, but (as whuber points out) you should consider the properties of your choice of statistic. Just choosing statistics at whim may produce poor power characteristics (power may be investigated for specific alternatives of interest). You should justify your choice of test statistic carefully - why that statistic, rather than some other, similar statistic? e.g. why $\sum_i|\frac{O_i-E_i}{E_i}|= \sum_i|O_i/E_i-1|$ rather than something that would seem to be more natural, perhaps such as $\sum_i |\frac{O_i-E_i}{\sqrt{E_i}}|$ or $\sum_i |\frac{O_i-E_i}{\sqrt{E_i(1-\pi_i)}}|$? Under multinomial sampling under $H_0$ it's easy enough to produce random tables of counts and so investigate the distribution of some test statistic under the null (and so produce a test). If you condition on the margins, it's also possible to sample contingency tables of counts under the null of independence (e.g. R has a function to do so). It's generally better to start with something whose good characteristics are established.
[1] Cressie, N. and Read, T. R. C. (1984), "Multinomial Goodness-of-fit Tests", JRSSB, 46(3), 440-464
[2] Read, C. B. (1993), "Freeman–Tukey chi-squared goodness-of-fit statistics", Statistics & Probability Letters, 18(4), November: 271–278
[3] Freeman, M. F. and Tukey, J. W. (1950), "Transformations related to the angular and the square root", The Annals of Mathematical Statistics, 21(4): 607–611, doi:10.1214/aoms/1177729756
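To illustrate the simulation route (my own sketch, with made-up counts and a hypothetical uniform null): draw multinomial count vectors under $H_0$, build the null distribution of the absolute-difference statistic $\sum_i|O_i/E_i-1|$, and read off a Monte Carlo p-value:

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 200, np.array([0.25, 0.25, 0.25, 0.25])   # hypothetical null model
E = n * p

def stat(O):
    # the statistic from the question: sum_i |O_i/E_i - 1|
    return np.abs(O / E - 1).sum()

O_obs = np.array([40, 60, 45, 55])               # hypothetical observed counts

# null distribution by multinomial simulation under H0
null = np.array([stat(rng.multinomial(n, p)) for _ in range(10_000)])
p_value = (null >= stat(O_obs)).mean()
print(f"observed statistic = {stat(O_obs):.3f},  Monte Carlo p = {p_value:.3f}")
```

The same loop works for any of the alternative statistics mentioned above; only `stat` changes.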
46,181
Chi-squared-like test with absolute instead of squared differences for calculating test statistic
Just to add to this, the concern about weighting for chi-square and related tests is well-founded in certain cases. One solution is to use the (unweighted) Euclidean distance. This isn't the absolute distance, but I think it's a little more intuitive than the weighted Euclidean distance involved in the chi-square test. There have been several recent papers on this; for a relatively detailed explication, see the paper by William Perkins, Mark Tygert, and Rachel Ward. The free version is here: https://arxiv.org/abs/1108.4126
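A toy illustration of why the $1/E_i$ weighting matters (my own sketch, with made-up counts): two tables at exactly the same Euclidean distance from the expected counts get very different chi-square scores, depending on whether the deviation sits in large-$E$ or small-$E$ cells:

```python
import numpy as np

E = np.array([90.0, 90.0, 10.0, 10.0])   # expected counts (hypothetical)
O_big = np.array([100, 80, 10, 10])      # +/-10 swapped between the two LARGE cells
O_small = np.array([90, 90, 20, 0])      # +/-10 swapped between the two SMALL cells

def chi2(O):
    # Pearson's weighted statistic: sum (O-E)^2 / E
    return ((O - E) ** 2 / E).sum()

def euclid(O):
    # unweighted Euclidean distance between observed and expected counts
    return np.sqrt(((O - E) ** 2).sum())

# same Euclidean distance, chi-square differs by a factor of 9
print(euclid(O_big), euclid(O_small))
print(chi2(O_big), chi2(O_small))
```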
46,182
Compute the probability that the provided classifier label is correct
SVMs do produce a decision function, but it does not directly correspond to a probability. There is a way in LibSVM (and sklearn, which uses it under the hood) to get probabilities using Platt scaling, which sounds like what you're looking for. Some more details about how that works are here: How does sklearn.svm.svc's function predict_proba() work internally? Converting LinearSVC's decision function to probabilities
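In scikit-learn this is just the probability=True flag on SVC (a minimal sketch on synthetic data):

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = make_classification(n_samples=300, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# probability=True fits Platt scaling (a sigmoid on the decision values)
# via an internal cross-validation on the training data
clf = SVC(kernel="rbf", probability=True, random_state=0).fit(X_tr, y_tr)

proba = clf.predict_proba(X_te)   # one probability per class, per test point
print(proba[:3])                  # each row sums to 1
```

Note that these calibrated probabilities can be mildly inconsistent with clf.predict for points near the boundary, since the sigmoid is fit separately from the margin.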
46,183
Compute the probability that the provided classifier label is correct
What you want sounds akin to a precision-recall (PR) curve. PR curves show precision (TP / (TP + FP)) as a function of recall (TP / (TP + FN)). Every PR point corresponds to a threshold $T$ on the SVM's output $d$ (the signed distance to the hyperplane): predict positive if $d \geq T$ and negative otherwise. As such, you could create a figure depicting precision as a function of the decision threshold. The general shape of this figure will be similar to a PR curve, except that it will be stretched horizontally (because not every unique decision value corresponds to a unique recall, the PR curve is denser horizontally). It must be noted that, perhaps contrary to intuition, precision is not necessarily highest for the largest decision values ($d$).
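With scikit-learn this figure falls out of precision_recall_curve, whose thresholds are exactly the sorted decision values (a sketch of mine on synthetic data):

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import LinearSVC
from sklearn.metrics import precision_recall_curve

X, y = make_classification(n_samples=400, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

clf = LinearSVC(random_state=0).fit(X_tr, y_tr)
d = clf.decision_function(X_te)      # signed distances to the hyperplane

# precision[i] is the precision obtained when predicting positive
# for all points with decision value >= thresholds[i]
precision, recall, thresholds = precision_recall_curve(y_te, d)

for t, p in list(zip(thresholds, precision))[::20]:
    print(f"threshold {t:+.2f} -> precision {p:.2f}")
```

Plotting precision against thresholds gives exactly the "precision as a function of decision threshold" figure described above, and typically shows the non-monotonicity mentioned in the closing remark.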
46,184
Specific robust measure of scale
This is possible in a somewhat artificial sense: just adjust $R$ a little whenever $S$ is nonzero, to guarantee $R$ is nonzero in such cases. "Resistant" means $R$ has a finite breakdown point, but the possibility of a sample with values $(Y, Y, \ldots, Y, Y^\prime \ne Y)$ means that $R$ has to respond to the value of even the most extreme datum. It would seem that such an $R$ could not be resistant. However, since you have imposed no other conditions on $R$ that might otherwise limit our choices, you could make it resistant by ensuring $R$ cannot change much in such cases. For example, define it as $$R(Y_1, \ldots, Y_n) = \text{MAD}(Y_1,\ldots, Y_n) + I(S(Y_1,\ldots,Y_n) \gt 0)/n$$ (using the indicator function $I$). Because the amount by which this could change the MAD is bounded, $R$ has the same breakdown point as the MAD, making it (strongly) resistant. This merely adds $1/n$ to the (non-negative) MAD whenever the SD is nonzero, guaranteeing the "magic property." By adding a quantity that decreases to zero as the sample size increases, asymptotically this $R$ will have the same expectation as the MAD, showing that the artificial correction isn't necessarily all that bad. Of course you would be concerned about all this only when the $Y_i$ do not have a continuous distribution or are strongly correlated (for otherwise the chance of the SD being zero would be nil). If you don't want to bias the estimator relative to the MAD you could, for instance, multiply the MAD by $$1 + \text{sign}(Y_1-Y_2)I(S(Y_1,\ldots,Y_n)\gt 0)/(2n)$$ when the $Y_i$ are iid. (This trick obviates the need to use a randomized estimator.) Naturally the MAD could be replaced by almost any resistant estimator of scale. The additive factor of $1/n$ could be replaced by any bounded nonzero function of $n$ or the multiplicative factors $1\pm 1/(2n)$ by any function with range in a finite interval $[a,b]$ and $a\gt 0$.
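A numeric sketch of this construction (mine, not from the answer), using the $y = (7, \ldots, 7, 42)$ example from elsewhere in this thread:

```python
import numpy as np

def mad(y):
    # median absolute deviation from the median (unscaled)
    y = np.asarray(y, dtype=float)
    return np.median(np.abs(y - np.median(y)))

def R(y):
    # MAD plus 1/n whenever the SD is nonzero -- the adjusted estimator above
    y = np.asarray(y, dtype=float)
    return mad(y) + (np.std(y) > 0) / len(y)

y = np.array([7.0] * 19 + [42.0])
print(mad(y), np.std(y), R(y))   # MAD is 0, SD is positive, R is 1/20
```

The adjustment term is bounded by $1/n$, so it cannot change the breakdown point, yet it guarantees $R > 0$ whenever $S > 0$.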
46,185
Specific robust measure of scale
I will here gather together comments with the import that this is not possible in a simple manner. Consider an example such as $y = 7, 7, \cdots, 7, 42$ with two distinct values, one of which occurs only once. The singleton in this example, $42$, has outlier flavour. So in broad terms resistant measures of scale are of interest here. However, as @Michael M points out in the question, for such examples (with positive standard deviation) the IQR and MAD are zero. We could add that so also is the length of the shortest half. But any other alternative to the SD could hardly ignore the value of the smallest positive pairwise difference, i.e. the smallest value of $|y_i - y_j|$ that is positive. Here that is $35$, and for examples of this type it must be equal to the range, and thus quintessentially not robust (or resistant). Note that for binary data coded $0$ and $1$ similar situations can easily arise, but using the SD as a measure of scale is pretty much universal without discussion. For binary data the IQR and MAD would usually be 0.
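The claims in this example are easy to check numerically (a sketch of mine): for $y = (7, \ldots, 7, 42)$ the IQR and MAD vanish, while the smallest positive pairwise difference equals the range:

```python
import numpy as np

y = np.array([7.0] * 19 + [42.0])

iqr = np.percentile(y, 75) - np.percentile(y, 25)
mad = np.median(np.abs(y - np.median(y)))

diffs = np.abs(y[:, None] - y[None, :])   # all pairwise |y_i - y_j|
min_pos_diff = diffs[diffs > 0].min()     # smallest positive difference

print(iqr, mad, min_pos_diff, y.max() - y.min())
```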
46,186
Mean and variance of a Beta distribution with $\alpha \ge 1$ or $\beta \ge 1$?
The parameters of a $\text{Beta}(\alpha,\beta)$ distribution with mean $0\lt m\lt 1$ and variance $0\lt v\lt m(1-m)$ are $$\alpha = m\frac{m(1-m)- v}{v},\quad \beta = (1-m)\frac{m(1-m)-v}{v}.$$ This shaded contour plot of $\alpha$ has contours ranging from $0$ (at the top of the colored region) to $1$ (along the bottom). The plot of $\beta$ is its mirror image. If they are not both less than $1$, then algebra shows us that $$v \le m\max\left(\frac{m(1-m)}{1+m}, \frac{(1-m)^2}{2-m}\right).$$ The valid set of all possible means and variances of Beta distributions is contained beneath the gray curve. Within that set, those where one or both of $\alpha$ and $\beta$ are $1$ or greater is shown in darker blue. These tend to have lower variances on the whole.
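The parameter formulas translate directly into a small method-of-moments helper (a sketch of mine; the function name and example values are made up):

```python
def beta_from_moments(m, v):
    """Method-of-moments Beta(alpha, beta) parameters from mean m and variance v.

    Valid only for 0 < m < 1 and 0 < v < m*(1-m).
    """
    if not (0 < m < 1 and 0 < v < m * (1 - m)):
        raise ValueError("need 0 < m < 1 and 0 < v < m*(1-m)")
    common = m * (1 - m) / v - 1          # equals (m(1-m) - v) / v
    return m * common, (1 - m) * common

a, b = beta_from_moments(0.3, 0.01)
print(a, b)   # plugging these back into mean/variance recovers 0.3 and 0.01
```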
46,187
Mean and variance of a Beta distribution with $\alpha \ge 1$ or $\beta \ge 1$?
The mean of the Beta distribution is $$\mu = \frac {\alpha}{\alpha + \beta}$$ We want to see whether restricting the permissible range of $\mu$ will guarantee that we will have either $\{\alpha \geq 1, \beta >0\}$, OR $\{\alpha >0, \beta \geq 1\}$. Treating the mean as a function of the parameters we obtain $$\frac {\partial \mu}{\partial \alpha} > 0, \;\; \frac {\partial \mu}{\partial \beta} <0$$ So it is monotonically increasing in $\alpha$ and monotonically decreasing in $\beta$. So, for given $\beta$, $$\mu \geq \frac{1}{1+\beta} \iff \alpha \geq 1 \tag {1}$$ but the situation $\{\alpha >0, \beta \geq 1\}$ permits all possible values of $\mu$ (in $(0,1)$). In other words, by restricting the mean to lie in the interval $[1/(1+\beta),\, 1)$ we can guarantee that we will have $\{\alpha \geq 1, \beta >0\}$. But there is no restriction on the mean that will guarantee us that $\{\alpha >0, \beta \geq 1\}$. So we should turn to the variance, which is $$\sigma^2 = \frac {\alpha \beta}{(\alpha + \beta)^2(\alpha + \beta +1)}$$ It is not difficult to determine that no restriction on the range of the variance can guarantee that we will have $\{\alpha >0, \beta \geq 1\}$. So: If we impose the restriction $\mu \geq 1/(1+\beta)$, then we will certainly have $\{\alpha \geq 1, \beta >0\}$, i.e. "not both parameters smaller than unity". But this is in a sense a partial result, since there is also the other way in which "not both parameters are smaller than unity". In other words, this approach imposes the additional restriction that only $\beta$ is allowed to be smaller than unity. It is therefore incompatible with Beta distributions for which we want to be able to have $\alpha \leq 1$ (and $\beta >1$).
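For completeness, the bound in $(1)$ follows from a one-line rearrangement (added here, not part of the original answer):

$$\mu \geq \frac{1}{1+\beta} \iff \frac{\alpha}{\alpha+\beta} \geq \frac{1}{1+\beta} \iff \alpha(1+\beta) \geq \alpha+\beta \iff \alpha\beta \geq \beta \iff \alpha \geq 1,$$

the last step using $\beta > 0$.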
46,188
TF-IDF versus Cosine Similarity in Document Search
Xeon is right in that TF-IDF and cosine similarity are two different things. TF-IDF will give you a representation for a given term in a document. Cosine similarity will give you a score for two different documents that share the same representation. However, "one of the simplest ranking functions is computed by summing the tf–idf for each query term". This solution is biased towards long documents, where more of your terms will appear (e.g., Encyclopedia Britannica). Also, there are much more advanced approaches based on a similar idea (most notably Okapi BM25). In general, you should use cosine similarity if you are comparing elements of the same nature (e.g., documents vs. documents) or when you need the score itself to have some meaningful value. In the case of cosine similarity, a 1.0 means that the two elements are exactly the same, based on their representation. I would recommend these resources to learn more about the topic: Modern Information Retrieval, by Ricardo Baeza-Yates et al.; Introduction to Information Retrieval, by Christopher Manning et al.
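A minimal scikit-learn sketch (the documents are made up) combining the two pieces - TF-IDF for the representation, cosine similarity for the score:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

docs = [
    "the cat sat on the mat",
    "the dog sat on the log",
    "information retrieval ranks documents for a query",
]
query = ["cat on a mat"]

vec = TfidfVectorizer()
D = vec.fit_transform(docs)          # one TF-IDF row per document
q = vec.transform(query)             # the query, in the same vector space

scores = cosine_similarity(q, D)[0]  # one score per document, in [0, 1]
for doc, s in sorted(zip(docs, scores), key=lambda t: -t[1]):
    print(f"{s:.3f}  {doc}")
```

Because TfidfVectorizer L2-normalizes rows by default, the cosine score here is length-invariant, which sidesteps the long-document bias of the summed-tf-idf ranking mentioned above.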
46,189
TF-IDF versus Cosine Similarity in Document Search
TF-IDF is about features and their normalization; the cosine metric is the metric you will use to score. If my memory serves, TF normalizes the word counts in a vector. You can then compare TF-normalized vectors using the cosine metric. Adding the IDF weight is about down-weighting overly frequent terms (e.g. stop words) so they won't dominate other, less frequent (and often more informative) features. Clean your corpus before creating TF-IDF vectors. For example, apply stemming (e.g. use the Porter stemmer). This will reduce the vocabulary and make the word vectors less orthogonal.
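A tiny numpy sketch (mine, with made-up count vectors) of the two steps described above: L2-normalize the raw term-count vectors, after which the cosine score is just a dot product:

```python
import numpy as np

# raw term-count (TF) vectors for two documents over a shared 4-term vocabulary
a = np.array([3.0, 0.0, 1.0, 2.0])
b = np.array([1.0, 2.0, 0.0, 2.0])

def l2_normalize(v):
    return v / np.linalg.norm(v)

# after normalization, cosine similarity reduces to a plain dot product
cos = l2_normalize(a) @ l2_normalize(b)
print(round(float(cos), 4))
```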
46,190
Expected value of Y = (1/X) where $X \sim Gamma$
The calculation of $E\left[X^{-1}\right]$ when $X$ is a Gamma random variable with order parameter $n$ and rate parameter $\lambda$ requires recognition of the density of another Gamma random variable (with order parameter $n-1$ and rate parameter $\lambda$) in the integral given by the law of the unconscious statistician for $E\left[X^{-1}\right]$. We have $$\begin{align} E\left[X^{-1}\right]&= \int_0^\infty \frac 1x \cdot \underbrace{\lambda \frac{(\lambda x)^{n-1}}{\Gamma(n)}e^{-\lambda x}}_{\Gamma(n,\lambda)~\text{density}}\,\mathrm dx\\ &= \lambda\frac{\Gamma(n-1)}{\Gamma(n)}\int_0^\infty \underbrace{\lambda \frac{(\lambda x)^{n-2}}{\Gamma(n-1)}e^{-\lambda x}}_{\Gamma(n-1,\lambda)~\text{density}}\,\mathrm dx\\ &= \frac{\lambda}{n-1} \end{align}$$ since for positive integer $k$, $\Gamma(k) = (k-1)!$.
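A quick numerical check of $\lambda/(n-1)$ with SciPy (a sketch of mine; with $n = 5$ and $\lambda = 2$ the answer should be $1/2$):

```python
from scipy.stats import gamma

n, lam = 5, 2.0                    # order (shape) n, rate lambda
X = gamma(a=n, scale=1 / lam)      # SciPy parameterizes by scale = 1/rate

e_inv = X.expect(lambda x: 1 / x)  # numerically integrates (1/x) * pdf(x)
print(e_inv, lam / (n - 1))        # both should be about 0.5
```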
How are individual trees added together in boosted regression tree?
They assume that you're keeping track of a "current estimator" $\hat f$, which is the sum of all the trees you've seen so far. (In code you would just store this as an array of all the trees you've seen so far.) The $\leftarrow$ sign just means "takes the new value". So when they say "add the new tree" they mean, basically, append the new tree to the array of trees you already store, so that where you previously would have computed $\hat f$ with that array, you now compute $\hat f + \lambda \hat f^b$. The residual is just the difference between the response and your current prediction $\hat f$. So if you add something to $\hat f$, you need to subtract it from the residual so that the two continue to sum up to the target response: $r_i \leftarrow r_i - \lambda \hat f^b(x_i)$ would translate in code to r[i] -= lambda * tree_prediction[i] or something.
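To make the bookkeeping concrete, here is a minimal pure-Python sketch of that loop, using depth-1 "stumps" in place of full trees. The data, the number of trees, and the shrinkage lambda are all made up for illustration; it is not the book's exact algorithm, just the residual-update mechanics.

```python
import random

def fit_stump(x, r):
    """Fit a depth-1 regression tree (stump) to residuals r: pick the
    split on x minimizing squared error, predict the mean residual
    on each side of the split."""
    best = None
    for s in sorted(set(x)):
        left = [ri for xi, ri in zip(x, r) if xi <= s]
        right = [ri for xi, ri in zip(x, r) if xi > s]
        if not left or not right:
            continue
        ml, mr = sum(left) / len(left), sum(right) / len(right)
        sse = (sum((ri - ml) ** 2 for ri in left) +
               sum((ri - mr) ** 2 for ri in right))
        if best is None or sse < best[0]:
            best = (sse, s, ml, mr)
    _, s, ml, mr = best
    return lambda xi: ml if xi <= s else mr

def boost(x, y, n_trees=50, lam=0.1):
    trees = []           # this array *is* the current estimator f_hat
    r = list(y)          # residuals start out equal to the response
    for _ in range(n_trees):
        tree = fit_stump(x, r)
        trees.append(tree)                    # f_hat <- f_hat + lam * f_b
        r = [ri - lam * tree(xi)              # r_i <- r_i - lam * f_b(x_i)
             for xi, ri in zip(x, r)]
    return lambda xi: sum(lam * t(xi) for t in trees)

random.seed(1)
x = [i / 20 for i in range(100)]
y = [xi ** 2 + random.gauss(0, 0.1) for xi in x]
f_hat = boost(x, y)
mse = sum((yi - f_hat(xi)) ** 2 for xi, yi in zip(x, y)) / len(x)
print(mse)   # far smaller than the variance of y, so the sum of trees fits
```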
Computing the power of Fisher's exact test in R
What you are asking for here is a post-hoc power analysis. (More specifically, "the probability of correctly rejecting the null hypothesis" is the power, and 1-power is beta, "the probability of a type-II error". You ask for both, but we only need one to know the other.) We take your existing dataset as the alternative hypothesis / model of the true data generating process. I don't know of a specialized, pre-existing function (e.g., in the pwr package) to do this, but, yes, this can be done in R. You will just have to simulate it. For (considerably) more information on power analyses, and simulating them in R, you should read my answer here: Simulation of logistic regression power analysis - designed experiments. In this case, I will just give a quick, adapted version for dealing with Fisher's exact test. (I usually write code as close to pseudocode as possible so that it may be more widely understood, but because this has the potential of taking so long to run, I try to move as much as possible out of the for loop, and use some of R's unique capacities.) 
table = matrix(c(18,20,15,15,10,55,65,70,30), 3, 3)
table
#      [,1] [,2] [,3]
# [1,]   18   15   65
# [2,]   20   10   70
# [3,]   15   55   30
N = sum(table)  # this is the total number of observations
N
# [1] 298
probs = prop.table(table)  # these are the probabilities of an observation
probs                      # being in any given cell
#            [,1]       [,2]      [,3]
# [1,] 0.06040268 0.05033557 0.2181208
# [2,] 0.06711409 0.03355705 0.2348993
# [3,] 0.05033557 0.18456376 0.1006711
probs.v = as.vector(probs)  # notice that the probabilities read column-wise
probs.v
# [1] 0.06040268 0.06711409 0.05033557 0.05033557 0.03355705 0.18456376 0.21812081
# [8] 0.23489933 0.10067114
cuts = c(0, cumsum(probs.v))  # notice that I add a 0 on the front
cuts
# [1] 0.00000000 0.06040268 0.12751678 0.17785235 0.22818792 0.26174497
# [7] 0.44630872 0.66442953 0.89932886 1.00000000
set.seed(4941)  # this makes it exactly reproducible
B = 10000       # number of iterations in simulation
vals = runif(N*B)  # generate random values / probabilities
cats = cut(vals, breaks=cuts,
           labels=c("11", "21", "31", "12", "22", "32", "13", "23", "33"))
cats = matrix(cats, nrow=N, ncol=B, byrow=F)
counts = apply(cats, 2, function(x){ as.vector(table(x)) })
rm(table, N, vals, probs, probs.v, cuts, cats)

p.vals = vector(length=B)  # this will store the outputs
ptm = proc.time()          # this lets me time the simulation
for(i in 1:B){
  mat = matrix(counts[,i], nrow=3, ncol=3, byrow=T)
  p.vals[i] = fisher.test(mat, simulate.p.value=T)$p.value
}
proc.time() - ptm  # not too bad, really
#    user  system elapsed
#   28.66    0.32   29.08

mean(p.vals >= .05)  # the estimated probability of type II errors is 0
# [1] 0
c(0, 3/B)  # using the rule of 3 to estimate the 95% CI
# [1] 0e+00 3e-04

Given how far your data diverge from the null hypothesis in Fisher's exact test, and the amount of data you have, this simulation does not turn up a single type II error in 10,000 iterations.
Because each iteration can be understood as a draw from a binomial distribution with probability $p$ (which we are estimating as the proportion of type II errors observed), this simulation is actually an estimate with some stochastic variability. We can form a 95% confidence interval bounding the true probability of a type II error. To get around the fact that we didn't actually find any type II errors, we will use the rule of 3 ($3/B$, where $B$ is the number of simulation iterations) to estimate the upper limit of the CI. Thus, the 95% CI for the true type II error rate is $[0,\ 0.0003]$. On a different note, @rvl points out in the comments that "[p]ost hoc power is a silly exercise". That is largely true. I have seen people make the argument, in effect, 'my results are not significant, but I don't have any power, so there's no reason to believe my theory isn't right', which is fairly bizarre on any number of levels. On the other hand, since your results are significant, it isn't clear what difference knowing the post-hoc power for your study makes either. I find that understanding post-hoc power can be useful pedagogically to help people begin to understand the topic. And we can also take this as a starting point for a-priori power analyses for planning future studies.
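If you want the same idea outside of R, here is a rough pure-Python sketch of the simulation. It substitutes Pearson's chi-square statistic (with its asymptotic df = 4 critical value) for Fisher's exact test, since no exact-test implementation is assumed, so it only approximates the workflow above; B is kept small for speed.

```python
import random

random.seed(4941)
table = [[18, 15, 65],
         [20, 10, 70],
         [15, 55, 30]]
N = sum(sum(row) for row in table)            # 298 observations
cells = [(i, j) for i in range(3) for j in range(3)]
weights = [table[i][j] for i, j in cells]     # observed cell frequencies

def chi_sq(obs):
    """Pearson chi-square statistic for a 3x3 table."""
    n = sum(sum(r) for r in obs)
    rows = [sum(r) for r in obs]
    cols = [sum(r[j] for r in obs) for j in range(3)]
    stat = 0.0
    for i in range(3):
        for j in range(3):
            e = rows[i] * cols[j] / n
            if e > 0:
                stat += (obs[i][j] - e) ** 2 / e
    return stat

CRIT = 9.488   # chi-square critical value for df = 4, alpha = 0.05
B = 500
rejections = 0
for _ in range(B):
    # draw a multinomial sample of size N from the observed cell probabilities
    draw = random.choices(cells, weights=weights, k=N)
    obs = [[0] * 3 for _ in range(3)]
    for i, j in draw:
        obs[i][j] += 1
    if chi_sq(obs) > CRIT:
        rejections += 1
power = rejections / B
print(power)   # estimated power; essentially 1 for a table this far from independence
```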
How to calculate confidence intervals for Precision & Recall (from a signal detection matrix)?
The following approach might be more accurate and more efficient. Goutte, C., & Gaussier, E. (2005, March). A probabilistic interpretation of precision, recall and F-score, with implication for evaluation. In European Conference on Information Retrieval (pp. 345-359). Springer Berlin Heidelberg.
How to calculate confidence intervals for Precision & Recall (from a signal detection matrix)?
I'll summarise the approaches which are sketched by the other two answers. Suggested by @marsu: assume that your confusion matrix $C$ has a multinomial distribution $M(n; \pi)$; then the distribution of the $TP$ is binomial. Assume a symmetric beta prior for precision $p$ and recall $r$, that is $p, r \sim Beta(\lambda, \lambda)$. Then given your data $D$ the posterior for $p$ is $p|D \sim Beta(TP + \lambda, FP + \lambda)$ and for $r$ is $r|D \sim Beta(TP + \lambda, FN + \lambda)$. You can then use software to calculate the appropriate interval as outlined here: Calculate the confidence interval for the mean of a beta distribution. Suggested by @fred: generate $D_n$ datasets by sampling with replacement from your underlying dataset $D$. For each $D_n$ fit your classifier and calculate the confusion matrix $C_n$. For each $C_n$ calculate precision $p_n$ and recall $r_n$; the confidence interval for these quantities can be calculated directly from the bootstrap distribution.
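For the bootstrap route, here is a minimal pure-Python sketch. It resamples the evaluated (truth, prediction) pairs rather than refitting the classifier each time, which is the cheaper variant, and the confusion counts (80 TP, 20 FP, 30 FN, 70 TN) are made up for illustration.

```python
import random

def precision_recall(pairs):
    """pairs: list of (y_true, y_pred) with labels in {0, 1}."""
    tp = sum(1 for t, p in pairs if t == 1 and p == 1)
    fp = sum(1 for t, p in pairs if t == 0 and p == 1)
    fn = sum(1 for t, p in pairs if t == 1 and p == 0)
    prec = tp / (tp + fp) if tp + fp else 0.0
    rec = tp / (tp + fn) if tp + fn else 0.0
    return prec, rec

random.seed(0)
# hypothetical evaluation set: 80 TP, 20 FP, 30 FN, 70 TN
pairs = [(1, 1)] * 80 + [(0, 1)] * 20 + [(1, 0)] * 30 + [(0, 0)] * 70

B = 2000
precs, recs = [], []
for _ in range(B):
    boot = random.choices(pairs, k=len(pairs))   # resample with replacement
    p, r = precision_recall(boot)
    precs.append(p)
    recs.append(r)

def pct_ci(xs, alpha=0.05):
    """Percentile bootstrap confidence interval."""
    xs = sorted(xs)
    lo = xs[int(alpha / 2 * len(xs))]
    hi = xs[int((1 - alpha / 2) * len(xs)) - 1]
    return lo, hi

print("precision 95% CI:", pct_ci(precs))   # brackets the point estimate 0.80
print("recall    95% CI:", pct_ci(recs))    # brackets the point estimate ~0.73
```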
How to calculate confidence intervals for Precision & Recall (from a signal detection matrix)?
An answer here suggests using bootstrapped statistics; we've done this at my place of employment and it seems to do the right thing. Confidence interval for precision and recall in classification
How to calculate confidence intervals for Precision & Recall (from a signal detection matrix)?
Please note that the approach described in the paper suggested by @Marsu_ is a Bayesian rather than a frequentist one. This means that the intervals it provides, despite what the article claims, are credible intervals, not confidence ones; and those are in fact very different in interpretation. The Bayesian approach assumes that the parameter of interest is a random variable having a prior distribution, and the credible interval bounds are fixed as encompassing a given probability mass of the posterior distribution of that parameter. The prior is chosen through some considerations external to the inference problem; the article itself suggests several alternatives. From the frequentist standpoint, the parameter is constant and the interval bounds are random variables; the confidence level represents how often, on average, the true value of the parameter will fall into the resulting confidence interval if the sample were re-drawn multiple times from the distribution. See Credible interval and Confidence interval: Meaning and interpretation for more information. So it seems the only remaining options for proper confidence intervals are resampling-based methods like the bootstrap or jackknife proposed by others.
How can MANOVA report a significant difference when none of the univariate ANOVAs reaches significance?
Here is a figure illustrating how it is possible: Two populations (red and blue) are sampled from the same 2D distribution, but slightly shifted from each other. On the left $N=100$, on the right $N=1000$ in each group. In both cases, I conduct two univariate ANOVAs (for the $x$ dimension and for the $y$ dimension) and one multivariate MANOVA. P-values are reported in the titles. Note that on the left, both univariate tests yield non-significant p-values of around $0.2$, whereas MANOVA reports a very significant one. This happens because to separate the groups the data need to be projected onto a diagonal running from the upper-left to the lower-right corner of the subplot; projecting the data on either the horizontal or the vertical axis would not result in significant separation of the two groups. Finding such an "optimal group-separating projection" is what discriminant analysis (LDA) and MANOVA both do (directly or indirectly). It might not be obvious from the left figure that there is any difference between groups at all, but it should be evident on the right figure, where more points are sampled from the same distributions. Here the univariate tests reach significance as well, but MANOVA is of course still much more sensitive. [Note that in the case of only one factor with only two levels (i.e. only two groups) ANOVA is equivalent to a t-test, and MANOVA is equivalent to Hotelling's $T^2$ test.]
Simulation code (Matlab):

Ns = [100 1000];
figure
for i=1:2
    subplot(1,2,i)
    X = randn(Ns(i),2);
    X = bsxfun(@times, X, [4 1]);
    X = X * [sind(45) cosd(45); cosd(45) -sind(45)];
    X = bsxfun(@plus, X, [1.5 1]);
    Y = randn(Ns(i),2);
    Y = bsxfun(@times, Y, [4 1]);
    Y = Y * [sind(45) cosd(45); cosd(45) -sind(45)];
    Y = bsxfun(@plus, Y, [1 1.5]);

    hold on
    scatter(X(:,1), X(:,2), 'r.')
    scatter(Y(:,1), Y(:,2), 'b.')
    axis([-10 10 -8 12])
    axis square

    [~, p1] = ttest2(X(:,1), Y(:,1));
    [~, p2] = ttest2(X(:,2), Y(:,2));
    [~, p3] = manova1([X; Y], [ones(size(X,1),1); 2*ones(size(Y,1),1)]);
    title(['ANOVAs: ' num2str(p1,2) ', ' num2str(p2,2) '; MANOVA: ' num2str(p3,2)])
end

(Note the group vector for manova1 is built with ones(size(X,1),1), i.e. one label per row; the original ones(size(X),1) would error.)
Detecting Bimodal Distribution
This looks like a typical task of detecting components of a mixture distribution, with the umbrella topic being finite mixture models. If you use R, you don't need to implement K-means or other clustering algorithms, as there are enough existing packages that already do that and more. One of the most popular ones - the mixtools package (http://cran.r-project.org/web/packages/mixtools) - contains the function normalmixEM, which is based on the Expectation-Maximization algorithm and can be used to fit your data to a mixture of normal distributions. For more details and examples, see the package documentation and this blog post: http://exploringdatablog.blogspot.com/2011/08/fitting-mixture-distributions-with-r.html. You may find it beneficial to read a brief introduction to mixture distributions prior to reading the above-mentioned post: http://exploringdatablog.blogspot.com/2011/06/brief-introduction-to-mixture.html. Other related packages include rebmix (http://cran.r-project.org/web/packages/rebmix), flexmix (http://cran.r-project.org/web/packages/flexmix) and mclust (for detailed information, please see http://www.stat.washington.edu/mclust and http://cran.r-project.org/web/packages/mclust). Performing a goodness-of-fit test for estimating a mixture of normal distributions has been frequently discussed on Cross Validated. For example, check this discussion: Goodness of fit test for a mixture in R. Finally, the following paper might be of interest, as it addresses the intersection of both topics related to your question - mixture analysis and speaker identification. I hope that you will find it useful: http://smtp.intjit.org/journal/volume/12/7/127_2.pdf.
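If you want to see what such a fit does under the hood, here is a bare-bones EM implementation for a two-component 1D Gaussian mixture in Python. It is only a sketch of what normalmixEM does (fixed iteration count, no convergence check, crude median-split initialization), run on simulated bimodal data.

```python
import math
import random

def em_two_gaussians(x, iters=150):
    """EM for a two-component 1D Gaussian mixture: returns the estimated
    mixing weights w, component means mu, and standard deviations sd."""
    xs = sorted(x)
    m = len(xs) // 2
    # crude initialization: split the data at the median
    mu = [sum(xs[:m]) / m, sum(xs[m:]) / (len(xs) - m)]
    sd = [1.0, 1.0]
    w = [0.5, 0.5]
    for _ in range(iters):
        # E step: responsibility of each component for each point
        resp = []
        for xi in x:
            d = [w[k] / (sd[k] * math.sqrt(2 * math.pi))
                 * math.exp(-0.5 * ((xi - mu[k]) / sd[k]) ** 2)
                 for k in range(2)]
            s = d[0] + d[1]
            resp.append([d[0] / s, d[1] / s])
        # M step: responsibility-weighted updates of w, mu, sd
        for k in range(2):
            nk = sum(r[k] for r in resp)
            w[k] = nk / len(x)
            mu[k] = sum(r[k] * xi for r, xi in zip(resp, x)) / nk
            var = sum(r[k] * (xi - mu[k]) ** 2 for r, xi in zip(resp, x)) / nk
            sd[k] = math.sqrt(max(var, 1e-6))
    return w, mu, sd

random.seed(42)
# simulated bimodal sample: N(0,1) and N(5,1), 300 points each
x = ([random.gauss(0.0, 1.0) for _ in range(300)] +
     [random.gauss(5.0, 1.0) for _ in range(300)])
w, mu, sd = em_two_gaussians(x)
print(sorted(mu))   # close to the true component means 0 and 5
```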
Detecting Bimodal Distribution
I have often used an Intervention Detection scheme, even when the data are not a time series, to determine the presence of "an intercept change" or a change in the mean value. An intercept change is essentially a mean change, or in other words a level shift. Please post your data and I will try and help you. Both plots suggest to me a possible intercept change after some anomalies (one-time pulses) have been accounted for. In a first course in statistics we are often given the fact that n1 values are in Group 1 and n2 values in Group 2. In actual practice we are often given one column of readings, possibly a time series, and the goal is to determine how many groups there are. This is in effect a form of one-dimensional discriminant analysis.
Multiple Regression or Separate Simple Regressions?
This is a somewhat confusing debate because I am not sure how the two-regression method is going to achieve your goal. Regression models with two continuous independent variables can be visualized as a 3-D space: the blue lines represent the association between $x$ and $y$ at each value of $z$. Without $z$ in the model, the slope of the blue line may no longer match the picture above, because $z$ could be associated with $x$ while at the same time being a causal component of $y$. In such a case, omitting $z$ from the model will confound the association between $x$ and $y$; that is, $\beta_1$, the regression coefficient of $x$ in $y = \beta_0 + \beta_1 x$, can be biased. This three-variable dynamic cannot be discerned just by examining $x$ on $y$ and $z$ on $y$ separately. I may consider the separate regressions an intermediate approach to understanding the relationships between the variables... but they cannot complete the job if seeing how "amount of X given and amount of Z taken impacts Y" is your goal. To complicate the answer slightly, your proposed method of performing a multiple linear regression with both $x$ and $z$ as independent variables may also be insufficient. Sometimes, the association of an independent variable may depend on the value of another independent variable, causing the regression plane to "warp." This is one of the many possibilities: in this case, the association between $x$ and $y$ (the slope of the blue line) changes at different values of $z$. If this is happening, you may need to modify your model by incorporating an interaction term between $x$ and $z$. Collinearity can also affect the results of your multiple regression model.
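The confounding point can be demonstrated numerically. The following sketch (simulated data, plain NumPy least squares rather than a statistics package) generates $x$ correlated with $z$, sets the true effect of $x$ on $y$ to 2.0, and compares the slope of $x$ from the simple regression $y \sim x$ against the slope from the multiple regression $y \sim x + z$:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 5000
z = rng.normal(size=n)
x = 0.8 * z + rng.normal(size=n)            # x is correlated with z
y = 2.0 * x + 3.0 * z + rng.normal(size=n)  # true coefficient of x is 2.0

def ols(cols, y):
    """Ordinary least squares with an intercept; returns coefficients
    as [intercept, slope_1, slope_2, ...]."""
    X = np.column_stack([np.ones(len(y))] + list(cols))
    return np.linalg.lstsq(X, y, rcond=None)[0]

b_simple = ols([x], y)     # y ~ x      (z omitted -> confounded)
b_full = ols([x, z], y)    # y ~ x + z

print(b_simple[1])  # slope of x, inflated well above the true 2.0
print(b_full[1])    # slope of x, close to the true 2.0
```

With $z$ omitted, part of its effect on $y$ is absorbed into the slope of $x$ (here the biased slope is roughly $2 + 3\,\mathrm{Cov}(x,z)/\mathrm{Var}(x) \approx 3.5$); including $z$ recovers the true coefficient. The two separate simple regressions can never reveal this.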