19,501
A job interview question on flipping a coin [duplicate]
The interviewer may also have been using this as a way to see how you nuance language around the discussion of statistical results. Other answers have made it clear that this is a low-probability event if the coin is fair. For many, that may be enough evidence to claim bias. However, depending on how the interviewer worded the question (and the context leading up to it), they may be looking for you to make the distinction that while the best available evidence points to the coin being biased, there is of course no way to know this with absolute certainty. (Although it would be enough evidence for me not to let anyone use that coin to decide who gets the dirty job.)
19,502
A job interview question on flipping a coin [duplicate]
With a large number of independent Bernoulli trials, the sample proportion has an approximately normal distribution by the Central Limit Theorem. Here $\hat{p} = 0.56$ and $se(\hat{p}) = \sqrt{0.56(1-0.56)/1000} \approx 0.016$. The test statistic for the proportion test of the hypothesis $p = 0.5$, corresponding to a fair coin, is $Z \approx (0.56-0.50)/0.016 \approx 3.8$. Using the normal approximation to the sampling distribution of the test statistic under the null hypothesis, the probability of observing 560 or more heads, or 440 or fewer, is very small (less than 0.001), which is very strong evidence that the coin is unfair.
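For readers who want to reproduce this, here is a minimal sketch in base R (an illustration added to this answer, with the numbers from the question hard-coded):

p_hat <- 560 / 1000
se <- sqrt(p_hat * (1 - p_hat) / 1000)  # standard error, about 0.016
z <- (p_hat - 0.5) / se                 # test statistic, about 3.8
2 * pnorm(-abs(z))                      # two-sided p-value, about 0.00013
binom.test(560, 1000, p = 0.5)          # exact binomial test, for comparison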
19,503
A job interview question on flipping a coin [duplicate]
Call $X$ the number of heads and assume the coin is not biased. Then $X$ is the sum of 1000 independent Bernoulli variables, each with mean $0.5$ and variance $0.5 \times 0.5 = 0.25$, so $X$ has mean $500$ and variance $250$. The standard deviation is $\sqrt{250} \approx 16$; intuitively, $X$ should be $500 \pm 16$. Since 1000 is large, $X$ can be approximated by a normal distribution. The question then becomes: what is the probability that a normally distributed variable lies at least $60/16 \approx 3.8$ standard deviations from its mean? You can find it in this table: https://en.wikipedia.org/wiki/Standard_normal_table, which gives $p = 1 - 2 \times 0.49993 = 0.00014$. In conclusion, if the coin is unbiased, the probability of a number of heads as extreme as 560 is about 0.014%. This is quite small, so the coin is almost certainly biased. Alternatively, you can use a $\chi^2$ test (https://en.wikipedia.org/wiki/Pearson%27s_chi-squared_test), which yields the same conclusion.
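If you would rather compute than read a table, a short R sketch (added for illustration, not part of the original answer) reproduces both calculations:

2 * pnorm(-60 / sqrt(250))                # two-sided normal tail, about 0.00015
chisq.test(c(560, 440), p = c(0.5, 0.5))  # Pearson chi-squared test, same conclusion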
19,504
A job interview question on flipping a coin [duplicate]
I would talk about normal distributions and standard deviations from the mean, drawing a nice normal distribution curve on a board. Then I would ASK what the definition of "biased" is, based on the number of standard deviations from the mean.
19,505
A job interview question on flipping a coin [duplicate]
I like the "easy" and "certified" answer that can come from having some basic resources. Managers aren't going to understand algebra. You get 5 bullet points and can't say any math at all, but defend your assertion. I have been required to do this. If this is your question in a job interview, especially if the person asking the question doesn't have a math degree, then they want to see if you "speak human". I would go to this site http://epitools.ausvet.com.au/content.php?page=CIProportion I would type in the numbers, and select 'all confidence-interval methods', and hit "submit". There are good guidelines for which method to use, but they all give a consistent number for the lower interval that does not include 50%. A non-biased coin would include 50% in its confidence interval. I would say "this is made by world-class PhD's in stats, and is a government facing AI in epidemiology", so without any other reason than this, we might still believe its numbers are good. Also, all the different methods agree. Comment: I was asked in my interview "how many marbles do I need to draw from a bowl in order to make a pair, when there are two colors uniformly randomly distributed", and why.
19,506
A job interview question on flipping a coin [duplicate]
I would say that it requires only some simple calculations. Let $X\sim \operatorname{Binomial}(1000, 0.5)$. If the coin is fair, it should be quite likely to get 560 heads out of 1000, so we calculate that probability: $\Pr(X=560)=\binom{1000}{560}0.5^{560}(1-0.5)^{1000-560}\approx0.00002$. Since the probability of getting 560 heads out of 1000 flips with a fair coin is so very small, I find it very likely that the coin is biased.
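In R (a sketch added for illustration), the point probability above, and the tail probability of seeing 560 or more heads, are one-liners:

dbinom(560, 1000, 0.5)                      # P(X = 560), about 2e-05
pbinom(559, 1000, 0.5, lower.tail = FALSE)  # P(X >= 560), on the order of 1e-04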
19,507
Why do we use PCA to speed up learning algorithms when we could just reduce the number of features?
Let's say you initially have $p$ features, but this is too many, so you want to fit your model on $d < p$ features. You could choose $d$ of your features and drop the rest. If $X$ is our feature matrix, this corresponds to using $XD$ where $D \in \{0,1\}^{p \times d}$ picks out exactly the columns of $X$ that we want to include. But this ignores all information in the other columns, so why not consider a more general dimension reduction $XV$, where $V \in \mathbb R^{p \times d}$? This is exactly what PCA does: we find the matrix $V$ such that $XV$ contains as much of the information in $X$ as possible. Not all linear combinations are created equal. Unless our $X$ matrix is of such low rank that a random set of $d$ columns can (with high probability) span the column space of all $p$ columns, we will certainly not be able to do just as well as with all $p$ features. Some information will be lost, so it behooves us to lose as little as possible. With PCA, the "information" that we're trying to avoid losing is the variation in the data. As for why we restrict ourselves to linear transformations of the predictors: the whole point in this use case is computation time. If we could do fancy non-linear dimension reduction on $X$, we could probably just fit the model on all of $X$ too. So PCA sits perfectly at the intersection of fast-to-compute and effective.
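To make the $XV$ construction concrete, here is a small R sketch with simulated data (an illustration added to the answer, not code from it):

set.seed(42)
X <- matrix(rnorm(100 * 10), nrow = 100)  # n = 100 observations, p = 10 features
d <- 3
pca <- prcomp(X)                          # centers X and finds the loading matrix
V <- pca$rotation[, 1:d]                  # p x d matrix of top principal directions
XV <- pca$x[, 1:d]                        # d-dimensional representation (centered X %*% V)
XD <- X[, 1:d]                            # the naive alternative: just keep d columns
sum(pca$sdev[1:d]^2) / sum(pca$sdev^2)    # fraction of total variance PCA retains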
19,508
Why do we use PCA to speed up learning algorithms when we could just reduce the number of features?
PCA reduces the number of features while preserving the variance/information in the original data. This keeps computation feasible while not losing the data's resemblance to reality.
19,509
Why do we use PCA to speed up learning algorithms when we could just reduce the number of features?
PCA solution First, beware when using PCA for this purpose. As I wrote in response to a related question, PCA does not necessarily lead to selection of features that are informative for the regression you intend to do (see also Jolliffe 1982). OP's proposed solution Now consider the proposed alternative mechanism: reduce the dimension of your feature vector to $k$ dimensions by just choosing $k$ of your features at random and eliminating the rest. In the problem statement we were asked to suppose that the dimension of your vector $x$ is very large; call this dimension $p$. There are $\binom{p}{k}$ ways to choose $k$ predictors from a group of $p$. For example, if $p=1000$ and we choose $k=5$ predictors from the dataset, there would be $\approx 8.25 \times 10^{12}$ different models to fit, and that's supposing we knew that $k=5$ and not $k=6$, etc. Put simply, it's not a problem you'd want to brute-force in a large-$p$ setting. Suggested solution To cope with regressions where $p$ is large, a number of penalised regression strategies have been proposed. In particular, the LASSO method performs dimension reduction while constructing a regression model by zeroing out the contribution of predictors that do not contribute enough to the model. There is a very clever algorithm (LARS) to fit the model efficiently.
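Both points are easy to illustrate in R; the following hedged sketch uses simulated data and the glmnet package, one standard LASSO implementation (the data and settings here are made up for illustration):

choose(1000, 5)                             # about 8.25e12 candidate 5-feature models
library(glmnet)
set.seed(1)
X <- matrix(rnorm(200 * 1000), nrow = 200)  # n = 200, p = 1000
y <- X[, 1] - 2 * X[, 2] + rnorm(200)       # only two features truly matter
fit <- cv.glmnet(X, y, alpha = 1)           # alpha = 1 is the LASSO penalty
coef(fit, s = "lambda.min")                 # most coefficients are zeroed out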
19,510
Bayesian logit model - intuitive explanation?
Logistic regression can be described as a linear combination $$ \eta = \beta_0 + \beta_1 X_1 + ... + \beta_k X_k $$ that is passed through the link function $g$: $$ g(E(Y)) = \eta $$ where the link function is the logit function $$ E(Y|X,\beta) = p = \text{logit}^{-1}( \eta ) $$ where $Y$ takes only values in $\{0,1\}$ and the inverse logit function transforms the linear combination $\eta$ to this range. This is where classical logistic regression ends. However, if you recall that $E(Y) = P(Y = 1)$ for variables that take only values in $\{0,1\}$, then $E(Y | X,\beta)$ can be considered as $P(Y = 1 | X,\beta)$. In this case, the logit function output can be thought of as the conditional probability of "success", i.e. $P(Y=1|X,\beta)$. The Bernoulli distribution describes the probability of observing a binary outcome with some parameter $p$, so we can describe $Y$ as $$ y_i \sim \text{Bernoulli}(p) $$ So with logistic regression we look for some parameters $\beta$ that together with the independent variables $X$ form a linear combination $\eta$. In classical regression $E(Y|X,\beta) = \eta$ (we take the link function to be the identity function); however, to model $Y$ taking values in $\{0,1\}$ we need to transform $\eta$ to fit in the $[0,1]$ range. Now, to estimate logistic regression the Bayesian way, you pick some priors for the $\beta_i$ parameters as with linear regression (see Kruschke et al., 2012), then use the logit function to transform the linear combination $\eta$, so as to use its output as the $p$ parameter of the Bernoulli distribution that describes your $Y$ variable. So, yes, you actually use the equation and the logit link function the same way as in the frequentist case, and the rest works (e.g. choosing priors) like estimating linear regression the Bayesian way. The simple approach to choosing priors is to choose Normal distributions (but you can also use other distributions, e.g. $t$ or Laplace distributions for a more robust model) for the $\beta_i$'s, with parameters $\mu_i$ and $\sigma_i^2$ that are preset or taken from hierarchical priors. Now, having the model definition, you can use software such as JAGS to perform Markov Chain Monte Carlo simulation to estimate the model. Below I post JAGS code for a simple logistic model (check here for more examples).

model {
  # setting up priors
  a ~ dnorm(0, .0001)
  b ~ dnorm(0, .0001)

  for (i in 1:N) {
    # passing the linear combination through the logit function
    logit(p[i]) <- a + b * x[i]
    # likelihood function
    y[i] ~ dbern(p[i])
  }
}

As you can see, the code directly translates to the model definition. What the software does is draw some values from the Normal priors for a and b, then use those values to estimate p and, finally, use the likelihood function to assess how likely your data are given those parameters (this is where Bayes' theorem comes in; see here for a more detailed description). The basic logistic regression model can be extended to model the dependency between the predictors using a hierarchical model (including hyperpriors).
In this case you can draw the $\beta_i$'s from a Multivariate Normal distribution, which enables us to include information about the covariance $\boldsymbol{\Sigma}$ between the independent variables: $$ \begin{pmatrix} \beta_0 \\ \beta_1 \\ \vdots \\ \beta_k \end{pmatrix} \sim \mathrm{MVN} \left( \begin{bmatrix} \mu_0 \\ \mu_1 \\ \vdots \\ \mu_k \end{bmatrix}, \begin{bmatrix} \sigma^2_0 & \sigma_{0,1} & \ldots & \sigma_{0,k} \\ \sigma_{1,0} & \sigma^2_1 & \ldots &\sigma_{1,k} \\ \vdots & \vdots & \ddots & \vdots \\ \sigma_{k,0} & \sigma_{k,1} & \ldots & \sigma^2_k \end{bmatrix} \right)$$ ...but this is going into details, so let's stop right here. The "Bayesian" part here is choosing priors, using Bayes' theorem, and defining the model in probabilistic terms. See here for a definition of "Bayesian model" and here for some general intuition on the Bayesian approach. What you can also notice is that defining models is pretty straightforward and flexible with this approach.

Kruschke, J. K., Aguinis, H., & Joo, H. (2012). The time has come: Bayesian methods for data analysis in the organizational sciences. Organizational Research Methods, 15(4), 722-752.
Gelman, A., Jakulin, A., Pittau, G. M., & Su, Y.-S. (2008). A weakly informative default prior distribution for logistic and other regression models. The Annals of Applied Statistics, 2(4), 1360-1383.
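For completeness, a minimal R driver for the JAGS model above might look like the following; this is a sketch assuming the rjags package and simulated data, not code from the original answer:

library(rjags)
model_string <- "model {
  a ~ dnorm(0, .0001)
  b ~ dnorm(0, .0001)
  for (i in 1:N) {
    logit(p[i]) <- a + b * x[i]
    y[i] ~ dbern(p[i])
  }
}"
set.seed(123)
x <- rnorm(100)
y <- rbinom(100, 1, plogis(-0.5 + 1.5 * x))  # simulate data with known a = -0.5, b = 1.5
jm <- jags.model(textConnection(model_string),
                 data = list(x = x, y = y, N = 100))
draws <- coda.samples(jm, variable.names = c("a", "b"), n.iter = 5000)
summary(draws)  # posterior means should land near -0.5 and 1.5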
19,511
Bayesian logit model - intuitive explanation?
What is all this prior, likelihood stuff? That's what makes it Bayesian. The generative model for the data is the same; the difference is that a Bayesian analysis chooses some prior distribution for parameters of interest, and calculates or approximates a posterior distribution, upon which all inference is based. Bayes rule relates the two: The posterior is proportional to likelihood times prior. Intuitively, this prior allows an analyst mathematically to express subject matter expertise or preexisting findings. For instance, the text you reference notes that the prior for $\bf\beta$ is a multivariate normal. Perhaps prior studies suggest a certain range of parameters that can be expressed with certain normal parameters. (With flexibility comes responsibility: One should be able to justify their prior to a skeptical audience.) In more elaborate models, one can use domain expertise to tune certain latent parameters. For example, see the liver example referenced in this answer. Some frequentist models can be related to a Bayesian counterpart with a specific prior, though I'm unsure which corresponds in this case.
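Stated symbolically (a one-line restatement of the sentence above, using $\beta$ for the parameters and $y$ for the data), Bayes' rule reads $$ p(\beta \mid y) \propto p(y \mid \beta)\, p(\beta), $$ where $p(y \mid \beta)$ is the likelihood and $p(\beta)$ is the prior.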
19,512
Selecting knots for a GAM
Update: if you are a stats newbie like me, this answer may suffice; if you want a more correct answer, see Nukimov's answer. A much better option is to fit your model using gam() in the mgcv package, which contains a method called Generalized Cross-Validation (GCV). GCV will automatically choose the number of knots for your model so that simplicity is balanced against explanatory power. When using gam() in mgcv, turn GCV on by setting k equal to -1, just like this:

set.seed(1)
dat <- data.frame(y = rnorm(10000), x = rnorm(10000))
library(mgcv)
G1 <- gam(y ~ s(x, k = -1, bs = "cs"), data = dat)
summary(G1)    # check the significance of your smooth term
gam.check(G1)  # inspect your residuals to evaluate whether the degree of smoothing is good

To plot your smooth line you will have to extract the model fit. This should do the trick:

plot(y ~ x, data = dat, cex = .1)
G1pred <- predict(G1)
I1 <- order(dat$x)          # sort by x so the line draws left to right
lines(dat$x[I1], G1pred[I1])

You can also adjust k manually and see what value of k brings you closest to the one chosen automatically by GCV.
19,513
Selecting knots for a GAM
Where does the idea come from that GCV will automatically choose the number of knots? The number of knots (i.e., the basis dimension) is fixed and cannot be changed during the model fit. What the GCV score in gam() does "automatically" is not choose the basis dimension k, as Ira S says, but choose the smoothness of each basis spline by introducing a wiggliness penalty into the fitting objective. To choose the number of knots k, you should use a value larger than the number of degrees of freedom you are expecting. Quoting the help page choose.k: "exact choice of k is not generally critical: it should be chosen to be large enough that you are reasonably sure of having enough degrees of freedom to represent the underlying ‘truth’ reasonably well, but small enough to maintain reasonable computational efficiency". So, basically, increase k in large steps until you see no changes in your plot, for instance. Summarizing: there is no "automatic" choice of k, as Ira S suggests; the user should always choose a k value as part of the model design. Otherwise you are most probably under-fitting your model!
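One way to follow this advice in practice is sketched below in R (simulated data, added here for illustration): refit with increasing k and watch the effective degrees of freedom stabilize, then confirm with gam.check.

library(mgcv)
set.seed(1)
x <- runif(500)
y <- sin(2 * pi * x) + rnorm(500, sd = 0.3)
for (k in c(5, 10, 20, 40)) {
  fit <- gam(y ~ s(x, k = k))
  cat("k =", k, "  total edf =", round(sum(fit$edf), 2), "\n")  # should plateau
}
gam.check(gam(y ~ s(x, k = 20)))  # k-index near 1 suggests the basis is big enough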
19,514
Endogeneity versus unobserved heterogeneity
The terms endogeneity and unobserved heterogeneity often refer to the same thing but usage varies somewhat, even within economics, the discipline I most associate with the terms. In a regression equation, an explanatory variable is endogenous if it is correlated with the error term. Endogeneity is often described as having three sources: omitted variables, measurement error, and simultaneity. Though it is often helpful to mention these "sources" separately, confusion sometimes arises because they are not truly distinct. Imagine a regression predicting the effect of education on wages. Perhaps our measure of education is simply the number of years someone spent in formal education, regardless of the type of education. If I have a clear idea of what type of education affects wages, I might describe this situation as measurement error in the education variable. Alternatively, I could describe the situation as an omitted variables problem (the variables indicating type of education). Perhaps wages also affect education decisions. If wages and education are measured at the same time this is an example of simultaneity, but it too, might be reframed in terms of omitted variables. Unobserved heterogeneity is simply variation/differences among cases which are not measured. If you understand endogeneity, I think you understand the implications of unobserved heterogeneity in a regression context.
19,515
Endogeneity versus unobserved heterogeneity
I agree with @Michael's description of endogeneity: this is about a problem with the variables that you include and their relationship to the variables that you do not (i.e., the stuff in the error term). Unobserved heterogeneity is typically about unobservable components of the effects that you are estimating. Continuing with @Michael's education example, unobserved heterogeneity might be that some people have higher returns (e.g., increases in wages) from going to school than others. Let the returns for person $i$ be $\beta + b_i$ with $\mathbb{E}(b_i) = 0$. We have $$\begin{equation*} y_i = x_i (\beta + b_i) + w^\prime_i \gamma + \epsilon_i, \end{equation*}$$ where $y_i$ is (typically, log) income, $x_i$ is years of education, and $w_i$ is a set of other controls. An example of endogeneity is when $x_i$ is correlated with $\epsilon_i$ (e.g., education is correlated with IQ, which is not among our other predictors). If we estimate a single coefficient, we have $$\begin{equation*} y_i = x_i \beta + w^\prime_i \gamma + (\epsilon_i + b_i x_i) = x_i \beta + w^\prime_i \gamma + \tilde{\epsilon}_i. \end{equation*}$$ When the individual returns $b_i$ are related to education $x_i$, the included variable $x_i$ is correlated with the error term $\tilde{\epsilon}_i$, inducing the same problems as in the case of endogeneity.
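A small simulation makes the mechanism concrete; this R sketch (added for illustration, using a deliberately simple special case in which $b_i$ is a function of $x_i$ and the $w_i$ controls are dropped) shows the resulting bias:

set.seed(1)
n <- 100000
x <- rexp(n)                 # 'education': skewed, mean 1
b <- 0.5 * (x - 1)           # individual returns: mean zero, but tied to x
y <- x * (1 + b) + rnorm(n)  # true average return is beta = 1
coef(lm(y ~ x))["x"]         # about 2.5: far from the true beta = 1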
19,516
Endogeneity versus unobserved heterogeneity
I understand heterogeneity to be any difference between individuals. Observed heterogeneity usually consists of the covariates and unobserved heterogeneity consists of any unobserved difference like ability or effort. Endogeneity refers to the relationship between the observed and unobserved variables, namely that they are dependent on one another.
19,517
Endogeneity versus unobserved heterogeneity
To wrap it up: Unobserved heterogeneity is one possible cause of endogeneity. Endogeneity is therefore the broader term. Unobserved heterogeneity implies endogeneity but not the other way around.
19,518
Endogeneity versus unobserved heterogeneity
The difference between unobserved heterogeneity and endogeneity in the case of omitted variables lies in the orthogonality assumptions made. In the former, the assumption is that the unobserved omitted variable is independent of the observed (included) explanatory variable $x$; in the latter, this assumption is relaxed, so that the unobserved (omitted) variable is correlated with some of the observed (included) explanatory variables.
19,519
Endogeneity versus unobserved heterogeneity
Easy answer, without explanation because it is not wanted: if the omitted variables that cause endogeneity are not observable we call it unobserved heterogeneity. Easy :)
19,520
Why isn't the sum of Precision and Recall a worthy measure?
It's not that $\text{Precision} + \text{Recall}$ is a bad measure per se, it's just that, on its own, the resulting number doesn't represent anything meaningful. You are on the right track though... what we are looking for is a combined average of the two performance measures, since we don't want to have to choose between them. Recall that precision and recall are defined as: $$\text{Precision} = \frac{\text{True Positive}}{\text{Predicted Positive}}$$ $$\text{Recall} = \frac{\text{True Positive}}{\text{Actual Positive}}$$ Since they have different denominators, adding them together results in something like this: $$\frac{\text{True Positive}\left(\text{Predicted Positive}+\text{Actual Positive}\right)}{\text{Predicted Positive}\times \text{Actual Positive}}$$ ...which isn't particularly useful. Let's go back to adding them together and make a tweak: multiply by $\frac{1}{2}$ so that the result stays in the correct scale, $[0,1]$. This is the familiar arithmetic mean: $$ \frac{1}{2} \times \left( \frac{\text{True Positive}}{\text{Predicted Positive}} + \frac{\text{True Positive}}{\text{Actual Positive}} \right) $$ So, we have two quantities with the same numerator but different denominators, and we would like to average them. What do we do? Well, we could flip them over and take the mean of their inverses; then, to get back "right side up", we take the inverse again. This process of inverting, averaging, and inverting again turns a "regular" mean into a harmonic mean. It just so happens that the harmonic mean of precision and recall is the F1 statistic. The harmonic mean is generally used instead of the standard arithmetic mean when dealing with rates, as we are doing here. In the end, the F1 statistic is just an average of precision and recall, and you use it because you don't want to choose one or the other to evaluate a model's performance.
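Written out (a compact restatement of the inversion argument above, with $P$ for precision and $R$ for recall): $$ F_1 = \left( \frac{P^{-1} + R^{-1}}{2} \right)^{-1} = \frac{2PR}{P + R} = \frac{2\,\text{True Positive}}{\text{Predicted Positive} + \text{Actual Positive}}. $$ Note how the shared numerator makes the final form collapse so neatly.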
19,521
Why isn't the sum of Precision and Recall a worthy measure?
The short answer is: you would not expect the sum of two percentages with two different denominators to have any particular meaning. Hence the approach of taking an average measure such as F1, F2, or F0.5; these at least retain the property of a percentage. What about their meaning, though? The beauty of Precision and Recall as separate measures is their ease of interpretation and the fact that they can easily be confronted with the model's business objectives. Precision measures the percentage of true positives out of the cases classified as positive by the model. Recall measures the percentage of true positives found by the model out of all the truly positive cases. For many problems, you will have to choose between optimizing either Precision or Recall. Any average measure loses the above interpretation and boils down to which measure you prefer most. F1 means either that you don't know whether you prefer Recall or Precision, or that you attach equal weight to each of them. If you consider Recall more important than Precision, then you should allocate a higher weight to it in the average calculation (e.g. F2), and vice versa (e.g. F0.5).
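For reference (this is the standard definition, not something from the original answer), the weighted family is $$ F_\beta = (1 + \beta^2)\,\frac{PR}{\beta^2 P + R}, $$ where $P$ is precision and $R$ is recall; $\beta = 2$ weights recall more heavily, $\beta = 0.5$ weights precision more heavily, and $\beta = 1$ recovers the harmonic mean.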
19,522
Why isn't the sum of Precision and Recall a worthy measure?
Adding the two is a bad measure. You'll get a score of at least 1 if you flag everything as positive, since that's 100% recall by definition, and you'll get a little precision bump on top of that. The harmonic mean used in F1 emphasizes the weak link, since the product of precision and recall sits in its numerator; you have to do at least okay on both precision and recall to get a decent F1 score.
19,523
Why isn't the sum of Precision and Recall a worthy measure?
The F1 score is especially valuable in the case of severely asymmetric class probabilities. Consider the following example: we test for a rare but dangerous illness. Assume that in a city of 1,000,000 people only 100 are infected.

Test A detects all 100 positives. However, it also has a 50% false positive rate: it erroneously flags another 500,000 people as ill. Meanwhile, test B misses 10% of the infected, but gives only 1,000 false positives (a 0.1% false positive rate).

Let's calculate the scores. For test A, precision is 100/500,100, effectively 0.0002; recall is exactly 1. For test B, precision is still small, 90/1,090 or about 0.08; recall is 0.9. If we naively sum (or take the arithmetic mean of) precision and recall, this gives about 1.00 (0.50) for test A and about 0.98 (0.49) for test B. So test A would seem marginally better.

However, from a practical perspective test A is worthless: if a person tests positive, the chance of being truly ill is about 1 in 5,000! Test B has real practical significance: you may take about 1,100 people to the hospital and observe them closely. This is accurately reflected by the F1 score: for test A it is about 2 × 0.0002 × 1 / (0.0002 + 1) ≈ 0.0004; for test B it is 2 × 0.08 × 0.9 / (0.08 + 0.9) ≈ 0.15, which is still modest but hundreds of times better. This match between score value and practical significance is what makes the F1 score valuable.
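These numbers can be checked directly; a minimal sketch of the arithmetic above in R:

# Test A: catches all 100 infected, plus ~500,000 false positives
p_a <- 100 / (100 + 500000); r_a <- 1
# Test B: catches 90 of the 100 infected, plus 1,000 false positives
p_b <- 90 / (90 + 1000); r_b <- 0.9
c(sum_a = p_a + r_a, sum_b = p_b + r_b)      # sums: ~1.0002 vs ~0.983
f1 <- function(p, r) 2 * p * r / (p + r)
c(f1_a = f1(p_a, r_a), f1_b = f1(p_b, r_b))  # F1: ~0.0004 vs ~0.15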
19,524
Why isn't the sum of Precision and Recall a worthy measure?
In general, maximizing the harmonic mean (which is what F1 is) emphasizes the values being similar. For example, take two models: the first has (precision, recall) = (0.8, 0.8) and the second has (precision, recall) = (0.6, 1.0). Using the arithmetic mean, both models would be equivalent. Using the harmonic mean, the first model is better because it doesn't trade precision for recall.
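Checking this example numerically (a minimal sketch):

hmean <- function(p, r) 2 * p * r / (p + r)  # harmonic mean of two values = F1
mean(c(0.8, 0.8)); hmean(0.8, 0.8)  # arithmetic 0.8, harmonic 0.8
mean(c(0.6, 1.0)); hmean(0.6, 1.0)  # arithmetic 0.8, harmonic 0.75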
19,525
Timeseries analysis procedure and methods using R
You should use the forecast package, which supports all of these models (and more) and makes fitting them a snap:

library(forecast)
x <- AirPassengers
mod_arima <- auto.arima(x, ic='aicc', stepwise=FALSE)
mod_exponential <- ets(x, ic='aicc', restrict=FALSE)
mod_neural <- nnetar(x, p=12, size=25)
mod_tbats <- tbats(x, ic='aicc', seasonal.periods=12)
par(mfrow=c(4, 1))
plot(forecast(mod_arima, 12), include=36)
plot(forecast(mod_exponential, 12), include=36)
plot(forecast(mod_neural, 12), include=36)
plot(forecast(mod_tbats, 12), include=36)

I would advise against smoothing the data prior to fitting your model. Your model is inherently going to try to smooth the data, so pre-smoothing just complicates things.

Edit based on new data: It actually looks like arima is one of the worst models you could choose for this training and test set. I saved your data to a file called coil.csv, loaded it into R, and split it into a training and a test set:

library(forecast)
dat <- read.csv('~/coil.csv')
x <- ts(dat$Coil, start=c(dat$Year[1], dat$Month[1]), frequency=12)
test_x <- window(x, start=c(2012, 3))
x <- window(x, end=c(2012, 2))

Next I fit a bunch of time series models: arima, exponential smoothing, neural network, tbats, bats, seasonal decomposition, and structural time series:

models <- list(
  mod_arima = auto.arima(x, ic='aicc', stepwise=FALSE),
  mod_exp = ets(x, ic='aicc', restrict=FALSE),
  mod_neural = nnetar(x, p=12, size=25),
  mod_tbats = tbats(x, ic='aicc', seasonal.periods=12),
  mod_bats = bats(x, ic='aicc', seasonal.periods=12),
  mod_stl = stlm(x, s.window=12, ic='aicc', robust=TRUE, method='ets'),
  mod_sts = StructTS(x)
)

Then I made some forecasts and compared them to the test set. I included a naive forecast that always predicts a flat, horizontal line:

forecasts <- lapply(models, forecast, 12)
forecasts$naive <- naive(x, 12)
par(mfrow=c(4, 2))
for(f in forecasts){
  plot(f)
  lines(test_x, col='red')
}

As you can see, the arima model gets the trend wrong, but I kind of like the look of the "Basic Structural Model". Finally, I measured each model's accuracy on the test set:

acc <- lapply(forecasts, function(f){
  accuracy(f, test_x)[2,,drop=FALSE]
})
acc <- Reduce(rbind, acc)
row.names(acc) <- names(forecasts)
acc <- acc[order(acc[,'MASE']),]
round(acc, 2)

                ME    RMSE     MAE   MPE MAPE MASE  ACF1 Theil's U
mod_sts     283.15  609.04  514.46  0.69 1.27 0.10  0.77      1.65
mod_bats     65.36  706.93  638.31  0.13 1.59 0.12  0.85      1.96
mod_tbats    65.22  706.92  638.32  0.13 1.59 0.12  0.85      1.96
mod_exp      25.00  706.52  641.67  0.03 1.60 0.12  0.85      1.96
naive        25.00  706.52  641.67  0.03 1.60 0.12  0.85      1.96
mod_neural   81.14  853.86  754.61  0.18 1.89 0.14  0.14      2.39
mod_arima   766.51  904.06  766.51  1.90 1.90 0.14  0.73      2.48
mod_stl    -208.74 1166.84 1005.81 -0.52 2.50 0.19  0.32      3.02

The metrics used are described in Hyndman, R.J. and Athanasopoulos, G. (2014) "Forecasting: principles and practice", who also happen to be the authors of the forecast package. I highly recommend you read their text: it's available for free online. The structural time series model is the best by several metrics, including MASE, which is the metric I tend to prefer for model selection.

One final question is: did the structural model get lucky on this test set? One way to assess this is to look at the training set errors.
Training set errors are less reliable than test set errors (because they can be over-fit), but in this case the structural model still comes out on top:

acc <- lapply(forecasts, function(f){
  accuracy(f, test_x)[1,,drop=FALSE]
})
acc <- Reduce(rbind, acc)
row.names(acc) <- names(forecasts)
acc <- acc[order(acc[,'MASE']),]
round(acc, 2)

                ME    RMSE     MAE   MPE MAPE MASE  ACF1 Theil's U
mod_sts      -0.03    0.99    0.71  0.00 0.00 0.00  0.08        NA
mod_neural    3.00 1145.91  839.15 -0.09 2.25 0.16  0.00        NA
mod_exp     -82.74 1915.75 1359.87 -0.33 3.68 0.25  0.06        NA
naive       -86.96 1936.38 1386.96 -0.34 3.75 0.26  0.06        NA
mod_arima  -180.32 1889.56 1393.94 -0.74 3.79 0.26  0.09        NA
mod_stl     -38.12 2158.25 1471.63 -0.22 4.00 0.28 -0.09        NA
mod_bats     57.07 2184.16 1525.28  0.00 4.07 0.29 -0.03        NA
mod_tbats    62.30 2203.54 1531.48  0.01 4.08 0.29 -0.03        NA

(Note that the neural network overfit, performing excellently on the training set and poorly on the test set.)

Finally, it would be a good idea to cross-validate all of these models, perhaps by training on 2008-2009/testing on 2010, training on 2008-2010/testing on 2011, training on 2008-2011/testing on 2012, training on 2008-2012/testing on 2013, and averaging errors across all of these time periods. If you wish to go down that route, I have a partially complete package for cross-validating time series models on github that I'd love you to try out and give me feedback/pull requests on:

devtools::install_github('zachmayer/cv.ts')
library(cv.ts)

Edit 2: Let's see if I remember how to use my own package! First of all, install and load the package from github (see above). Then cross-validate some models (using the full dataset):

library(cv.ts)
x <- ts(dat$Coil, start=c(dat$Year[1], dat$Month[1]), frequency=12)
ctrl <- tseriesControl(stepSize=1, maxHorizon=12, minObs=36, fixedWindow=TRUE)
models <- list()
models$arima = cv.ts(x, auto.arimaForecast, tsControl=ctrl, ic='aicc', stepwise=FALSE)
models$exp = cv.ts(x, etsForecast, tsControl=ctrl, ic='aicc', restrict=FALSE)
models$neural = cv.ts(x, nnetarForecast, tsControl=ctrl, nn_p=6, size=5)
models$tbats = cv.ts(x, tbatsForecast, tsControl=ctrl, seasonal.periods=12)
models$bats = cv.ts(x, batsForecast, tsControl=ctrl, seasonal.periods=12)
models$stl = cv.ts(x, stl.Forecast, tsControl=ctrl, s.window=12, ic='aicc', robust=TRUE, method='ets')
models$sts = cv.ts(x, stsForecast, tsControl=ctrl)
models$naive = cv.ts(x, naiveForecast, tsControl=ctrl)
models$theta = cv.ts(x, thetaForecast, tsControl=ctrl)

(Note that I reduced the flexibility of the neural network model, to try to help prevent it from overfitting.)

Once we've fit the models, we can compare them by MAPE (cv.ts doesn't yet support MASE):

res_overall <- lapply(models, function(x) x$results[13,-1])
res_overall <- Reduce(rbind, res_overall)
row.names(res_overall) <- names(models)
res_overall <- res_overall[order(res_overall[,'MAPE']),]
round(res_overall, 2)

             ME    RMSE     MAE   MPE MAPE
naive     91.40 1126.83  961.18  0.19 2.40
ets       91.56 1127.09  961.35  0.19 2.40
stl     -114.59 1661.73 1332.73 -0.29 3.36
neural     5.26 1979.83 1521.83  0.00 3.83
bats     294.01 2087.99 1725.14  0.70 4.32
sts     -698.90 3680.71 1901.78 -1.81 4.77
arima  -1687.27 2750.49 2199.53 -4.23 5.53
tbats   -476.67 2761.44 2428.34 -1.23 6.10

Ouch. It would appear that our structural forecast got lucky. Over the long term, the naive forecast makes the best forecasts, averaged across a 12-month horizon (and the arima model is still one of the worst models).
Let's compare the models at each of the 12 forecast horizons, and see whether any of them ever beats the naive model:

library(reshape2)
library(ggplot2)
res <- lapply(models, function(x) x$results$MAPE[1:12])
res <- data.frame(do.call(cbind, res))
res$horizon <- 1:nrow(res)
res <- melt(res, id.var='horizon', variable.name='model', value.name='MAPE')
res$model <- factor(res$model, levels=row.names(res_overall))
ggplot(res, aes(x=horizon, y=MAPE, col=model)) +
  geom_line(size=2) + theme_bw() +
  theme(legend.position="top") +
  scale_color_manual(values=c(
    "#1f78b4", "#ff7f00", "#33a02c", "#6a3d9a",
    "#e31a1c", "#b15928", "#a6cee3", "#fdbf6f", "#b2df8a"))

Tellingly, the exponential smoothing model always picks the naive model (the orange line and blue line overlap 100%). In other words, the naive forecast of "next month's coil prices will be the same as this month's coil prices" is more accurate (at almost every forecast horizon) than 7 extremely sophisticated time series models. Unless you have some secret information the coil market doesn't already know, beating the naive coil price forecast is going to be extremely difficult. It's never the answer anyone wants to hear, but if forecast accuracy is your goal, you should use the most accurate model. Use the naive model.
19,526
Timeseries analysis procedure and methods using R
The approach that you have taken is reasonable. If you are new to forecasting, then I would recommend the following books: Forecasting Methods and Applications by Makridakis, Wheelwright and Hyndman, and Forecasting: Principles and Practice by Hyndman and Athanasopoulos. The first book is a classic which I strongly recommend. The second is an open-source book which you can refer to for forecasting methods and how they are applied using the R open-source package forecast. Both books provide good background on the methods that I have used. If you are serious about forecasting, then I would recommend Principles of Forecasting by Armstrong, which is a collection of a tremendous amount of forecasting research that practitioners may find very helpful.

Coming to your specific example on coil, it reminds me of the concept of forecastability, which most textbooks often ignore. Some series, such as yours, simply cannot be forecasted because they are patternless: they exhibit no trend, no seasonal pattern, and no other systematic variation. In that case I would categorize the series as less forecastable. Before venturing into extrapolation methods, I would look at the data and ask the question: is my series forecastable? In this specific example, a simple extrapolation such as the random walk forecast, which uses the last observed value as the forecast, has been found to be most accurate.

One additional comment about neural networks: they are notorious for failing in empirical competitions. I would try traditional statistical methods for time series before attempting to use neural networks for time series forecasting tasks.

I attempted to model your data in R's forecast package; hopefully the comments are self-explanatory.

coil <- c(44000, 44500, 42000, 45000, 42500, 41000, 39000, 35000, 34000,
          29700, 29700, 29000, 30000, 30000, 31000, 31000, 33500, 33500,
          33000, 31500, 34000, 35000, 35000, 36000, 38500, 38500, 35500,
          33500, 34500, 36000, 35500, 34500, 35500, 38500, 44500, 40700,
          40500, 39100, 39100, 39100, 38600, 39500, 39500, 38500, 39500,
          40000, 40000, 40500, 41000, 41000, 41000, 40500, 40000, 39300,
          39300, 39300, 39300, 39300, 39800)
coilts <- ts(coil, start=c(2008,4), frequency=12)

library("forecast")

# Data for modeling
coilts.mod <- window(coilts, end=c(2012,3))
# Data for testing
coil.test <- window(coilts, start=c(2012,4))

# Model using multiple methods - arima, expo smooth, theta, random walk, structural time series
# arima
coil.arima <- forecast(auto.arima(coilts.mod), h=11)
# exponential smoothing
coil.ets <- forecast(ets(coilts.mod), h=11)
# theta
coil.tht <- thetaf(coilts.mod, h=11)
# random walk
coil.rwf <- rwf(coilts.mod, h=11)
# structts
coil.struc <- forecast(StructTS(coilts.mod), h=11)

## accuracy
arm.acc <- accuracy(coil.arima, coil.test)
ets.acc <- accuracy(coil.ets, coil.test)
tht.acc <- accuracy(coil.tht, coil.test)
rwf.acc <- accuracy(coil.rwf, coil.test)
str.acc <- accuracy(coil.struc, coil.test)

Using MAE on the hold-out data, I would choose ARIMA for short-term forecasts (1-12 months). For the long term, I would rely on the random walk forecast. Note that ARIMA picked a random walk model with drift, (0,1,0) + drift, which tends to be much more accurate than a pure random walk model in this type of problem, specifically over the short term. (See the chart based on the accuracy function as shown in the above code.)

Specific answers to your specific questions: "Also one question I had was, before passing to ARIMA or neural net should I smooth the data? If yes, using what? The data shows both seasonality and trend."
No. Forecasting methods naturally smooth the data as part of fitting the model, and the above data doesn't in fact show trend or seasonality. If you determine that your data does exhibit seasonality and trend, then choose an appropriate method.

Practical tips to improve accuracy:

Combine a variety of forecasting methods: you could try non-extrapolation methods such as forecasting by analogy or judgmental forecasting and combine them with your statistical methods to provide accurate predictions (see this article for the benefits of combining, and the sketch after this answer). I tried combining the above 5 methods, but the predictions were not more accurate than the individual methods; one possible reason is that the individual forecasts are similar. You reap the benefits of combining forecasts when you combine diverse methods, such as statistical and judgmental forecasts.

Detect and understand outliers: real-world data is filled with outliers. Identify and appropriately treat outliers in the time series (I recommend reading this post). Looking at your coil data, is the drop prior to 2009 an outlier?

Edit: The data appears to be following some type of macroeconomic trend. My guess is that the downward trend seen before 2009 follows the economic slump of 2008-2009, with a pick-up after 2009. If this is the case, then I would altogether avoid using any extrapolation methods and instead rely on sound theory about how these economic trends behave, such as the one referenced by @GraemeWalsh.

Hope this helps.
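For completeness, here is a minimal sketch of the simple equal-weight forecast combination mentioned above (the object names come from the code earlier in this answer; treat this as illustrative, not as the exact combination I ran):

# Equal-weight combination of the five point forecasts
fc_list <- list(coil.arima$mean, coil.ets$mean, coil.tht$mean,
                coil.rwf$mean, coil.struc$mean)
coil.comb <- Reduce(`+`, fc_list) / length(fc_list)
accuracy(coil.comb, coil.test)  # compare against the individual methods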
19,527
How do I train a (logistic?) regression in R using L1 loss function?
What you want to do does not exist because it is, for lack of a better word, mathematically flawed. But first, I will stress why I think the premises of your question are sound. I will then try to explain why I think the conclusions you draw from them rest on a misunderstanding of the logistic model and, finally, I will suggest an alternative approach.

I will denote $\{(\pmb x_i,y_i)\}_{i=1}^n$ your $n$ observations (bold letters denote vectors), which lie in $p$-dimensional space (the first entry of $\pmb x_i$ is 1) with $p<n$, $y_i\in \{0,1\}$, and $f(\pmb x_i)= f(\pmb x_i'\pmb\beta)$ a monotone function of $\pmb x_i'\pmb\beta$, say like the logistic curve, to fix ideas. For expediency, I will just assume that $n$ is sufficiently large compared to $p$.

You are correct that if you intend to use TVD as the criterion to evaluate the fitted model, then it is reasonable to expect your fit to optimize that same criterion among all possible candidates, on your data. Hence
$$\pmb\beta^*=\underset{\pmb\beta\in\mathbb{R}^{p}}{\arg\min}\;||\pmb y-f(\pmb x_i'\pmb\beta)||_1$$
The problem is the error term $\epsilon_i=y_i-f(\pmb x_i'\pmb\beta)$: if we enforce $E(\pmb\epsilon)=0$ (we simply want our model to be asymptotically unbiased), then $\epsilon_i$ must be heteroskedastic. This is because $y_i$ can take on only two values, 0 and 1. Therefore, given $\pmb x_i$, $\epsilon_i$ can also take on only two values: $1-f(\pmb x_i'\pmb\beta)$ when $y_i=1$, which occurs with probability $f(\pmb x_i'\pmb\beta)$, and $-f(\pmb x_i'\pmb\beta)$ when $y_i=0$, which occurs with probability $1-f(\pmb x_i'\pmb\beta)$. These considerations together imply that:
$$\begin{align*}\text{var}(\pmb\epsilon)=E(\pmb\epsilon^2)&=(1-f(\pmb x'\pmb\beta))^2f(\pmb x'\pmb\beta)+(-f(\pmb x'\pmb\beta))^2(1-f(\pmb x'\pmb\beta))\\ &=(1-f(\pmb x'\pmb\beta))f(\pmb x'\pmb\beta)\\ &=E(\pmb y|\pmb x)E(1-\pmb y|\pmb x)\end{align*}$$
hence $\text{var}(\pmb\epsilon)$ is not constant but concave-parabola shaped, and is maximized when $\pmb x$ is such that $E(y|\pmb x)\approx .5$.

This inherent heteroskedasticity of the residuals has consequences. It implies, among other things, that when minimizing the $l_1$ loss function you are asymptotically over-weighting part of your sample. That is, the fitted $\pmb\beta^*$ doesn't fit the data at all, but only the portion of it clustered around places where $\pmb x$ is such that $E(\pmb y|\pmb x)\approx .5$. To wit, these are the least informative data points in your sample: they correspond to the observations for which the noise component is largest. Hence, your fit is pulled towards $\pmb\beta^*=\pmb\beta:f(\pmb x'\pmb\beta)\approx .5$, i.e., made irrelevant.

One solution, as is clear from the exposition above, is to drop the requirement of unbiasedness. A popular way to bias the estimator (with some Bayesian interpretation attached) is by including a shrinkage term.
If we re-scale the response:
$$y^+_i=2(y_i-.5),\quad 1\leq i\leq n$$
and, for computational expediency, replace $f(\pmb x'\pmb\beta)$ by another monotone function $g(\pmb x,[c,\pmb\gamma])=\pmb x'[c,\pmb\gamma]$ --it will be convenient in the sequel to denote the first component of the parameter vector as $c$ and the remaining $p-1$ ones as $\pmb\gamma$-- and include a shrinkage term (for example one of the form $||\pmb\gamma||_2$), the resulting optimization problem becomes:
$$[c^*,\pmb\gamma^{*}]=\underset{[c,\pmb\gamma]\in\mathbb{R}^{p}}{\arg\min}\;\sum_{i=1}^n\max(0,1-y_i^+\pmb x_i'[c,\pmb\gamma])+\frac{1}{2}||\pmb\gamma||_2$$
Note that in this new (also convex) optimization problem, the penalty for a correctly classified observation is 0, and it grows linearly with $\pmb x'[c,\pmb\gamma]$ for a misclassified one --as in the $l_1$ loss. The solution $[c^*,\pmb\gamma^*]$ to this second optimization problem gives the celebrated linear SVM (with perfect separation) coefficients. As opposed to $\pmb\beta^*$, it makes sense to learn these $[c^*,\pmb\gamma^{*}]$ from the data with a TVD-type penalty ('type' because of the bias term). Consequently, this solution is widely implemented. See for example the R package LiblineaR.
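As a concrete illustration of the alternative suggested above, here is a minimal sketch of fitting a linear SVM with LiblineaR. The simulated data and parameter choices are mine; the solver type code is my reading of the liblinear conventions (type 3 being, if I recall correctly, L2-regularized hinge-loss SVC) and should be checked against the package documentation:

library(LiblineaR)
set.seed(1)
n <- 200; p <- 3
X <- matrix(rnorm(n * p), n, p)
score <- drop(X %*% c(1, -2, 0.5)) + rnorm(n)
y <- ifelse(score > 0, 1, -1)  # labels in {-1, 1}
# type = 3: assumed to select L2-regularized L1-loss (hinge) SVC (dual)
fit <- LiblineaR(data = X, target = y, type = 3, cost = 1)
pred <- predict(fit, X)
mean(pred$predictions == y)  # training accuracy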
19,528
How do I train a (logistic?) regression in R using L1 loss function?
I'm not sure why you would want to use L1 loss for something constrained between 0 and 1. Depending on what your goal is, you may want to consider something like hinge loss instead, which is similar to L1 loss in one direction and flat in the other. In any case, the code below should do what you've asked for. Note that the optimal response is basically a step function.

set.seed(1)

# Fake data
x = seq(-1, 1, length = 100)
y = rbinom(100, size = 1, prob = plogis(x))  # plogis is the logistic function

# L1 loss
loss = function(y, yhat){
  sum(abs(y - yhat))
}

# Function to compute the loss associated with a given slope & intercept
fn = function(par){
  a = par[1]
  b = par[2]
  loss(y = y, yhat = plogis(a + b * x))
}

# Find the optimal parameters
par = optim(
  par = c(a = 0, b = 0),
  fn = fn
)$par

# Plot the results
plot(y ~ x)
curve(plogis(par[1] + par[2] * x), add = TRUE, n = 1000)
19,529
How do I train a (logistic?) regression in R using L1 loss function?
You can use the glmnet package for fitting L1- and L2-regularized models. It's not limited to logistic regression, but it includes it. Here is the vignette: http://web.stanford.edu/~hastie/glmnet/glmnet_alpha.html There is also a webinar: https://www.youtube.com/watch?v=BU2gjoLPfDc

Liblinear is good, but I've found glmnet easier to get started with. Glmnet includes a function that does cross-validation and selects a regularization parameter for you based on different metrics, such as the AUC.

Regarding theory, I would read the Tibshirani paper on the lasso (L1 regularization) and the relevant chapter of The Elements of Statistical Learning: http://statweb.stanford.edu/~tibs/lasso/lasso.pdf

About the log loss: for logistic regression, the (penalized) log loss, i.e. the binomial deviance, is in fact what glmnet minimizes when fitting; it also serves as one of the metrics for evaluating models in cross-validation.
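A minimal sketch of the cross-validated workflow mentioned above (the simulated data is mine; see the glmnet vignette for the full API):

library(glmnet)
set.seed(1)
X <- matrix(rnorm(500 * 10), 500, 10)
y <- rbinom(500, 1, plogis(drop(X %*% c(2, -1, rep(0, 8)))))
# alpha = 1 gives the lasso (L1 penalty); alpha = 0 gives ridge (L2)
cvfit <- cv.glmnet(X, y, family = "binomial", alpha = 1, type.measure = "auc")
cvfit$lambda.min               # regularization parameter chosen by CV
coef(cvfit, s = "lambda.min")  # sparse coefficient vector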
19,530
Stepwise regression in R – Critical p-value
As I explained in my comment on your other question, step uses AIC rather than p-values. However, for a single variable at a time, AIC does correspond to using a p-value of 0.15 (or, to be more precise, 0.1573):

Consider comparing two models which differ by a single variable. Call the models $\cal{M}_0$ (smaller model) and $\cal{M}_1$ (larger model), and let their AICs be $\text{AIC}_0$ and $\text{AIC}_1$ respectively. Using the AIC criterion, you'd use the larger model if $\text{AIC}_1<\text{AIC}_0$. This will be the case if $-2\log\cal{L_0}-(-2\log\cal{L_1})>2$. But this is simply the statistic in a likelihood ratio test. From Wilks' theorem, we'll reject the null if the statistic exceeds the upper $\alpha$ quantile of a $\chi^2_1$. So if we use a hypothesis test to choose between the smaller model and the larger one, we choose the larger model when $-2\log\cal{L_0}-(-2\log\cal{L_1})>C_\alpha$. Now $2$ lies at the 84.27th percentile of a $\chi^2_1$. Hence, choosing the larger model when it has smaller AIC corresponds to rejecting the null hypothesis for a test of the additional term at a p-value of $1-0.843=0.157$, or $15.7\%$.

So how do you modify it? Easy: change the k parameter in step from 2 to something else. You want 10% instead? Make it about 2.71:

qchisq(0.10, 1, lower.tail=FALSE)
[1] 2.705543

You want 2.5%? Set k to about 5.02:

qchisq(0.025, 1, lower.tail=FALSE)
[1] 5.023886

and so on. However, even though that solves your question, I advise you to pay close attention to Frank Harrell's answer on your other question, and to search out responses from a great many statisticians on other questions here relating to stepwise regression; their advice is very consistently to avoid stepwise procedures in general.
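Putting this into practice looks something like the following (a minimal sketch; the model and data are simulated purely for illustration):

set.seed(1)
dat <- data.frame(matrix(rnorm(100 * 6), 100, 6))
names(dat) <- c("y", paste0("x", 1:5))
full <- lm(y ~ ., data = dat)
# default step() uses k = 2 (AIC), i.e. an implicit p-value of ~0.157
step_aic <- step(full, trace = 0)
# use the k corresponding to a 10% p-value threshold instead
step_10 <- step(full, k = qchisq(0.10, 1, lower.tail = FALSE), trace = 0)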
19,531
Stepwise regression in R – Critical p-value
As said above, the step function in R is based on the AIC criterion. But I guess that by p-value you mean alpha-to-enter and alpha-to-leave. What you can do is use the function stepwise written by Paul Rubin and available here. As you can see, it has the arguments alpha.to.enter and alpha.to.leave, which you can change. Note that this function uses the F-test, or equivalently the t-test, to select the models. Moreover, it can handle not only stepwise regression but also forward selection and backward elimination if you define the arguments appropriately.
19,532
What to make of explanatories in time series?
Based upon the comments that you've offered to the responses, you need to be aware of spurious causation. Any variable with a time trend is going to be correlated with another variable that also has a time trend. For example, my weight from birth to age 27 is going to be highly correlated with your weight from birth to age 27. Obviously, my weight isn't caused by your weight. If it was, I'd ask that you go to the gym more frequently, please.

As you are familiar with cross-section data, I'll give you an omitted-variables explanation. Let my weight be $x_t$ and your weight be $y_t$, where
$$\begin{align*}x_t &= \alpha_0 + \alpha_1 t + \epsilon_t \text{ and} \\ y_t &= \beta_0 + \beta_1 t + \eta_t.\end{align*}$$
Then the regression
$$\begin{equation*}y_t = \gamma_0 + \gamma_1 x_t + \nu_t\end{equation*}$$
has an omitted variable---the time trend---that is correlated with the included variable, $x_t$. Hence, the coefficient $\gamma_1$ will be biased (in this case, it will be positive, as both our weights grow over time).

When you are performing time series analysis, you need to be sure that your variables are stationary or you'll get these spurious causation results. An exception would be cointegrated series, but I'd refer you to time series texts to hear more about that.
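A quick simulation of this effect (a minimal sketch; all coefficients and noise levels are illustrative): two independent series that each trend upward appear strongly related in a naive regression, and the effect disappears once the trend is controlled for:

set.seed(1)
t <- 1:100
x <- 1 + 0.5 * t + rnorm(100, sd = 3)  # my weight: trend plus noise
y <- 2 + 0.4 * t + rnorm(100, sd = 3)  # your weight: independent trend plus noise
summary(lm(y ~ x))      # large, "significant" slope -- spurious
summary(lm(y ~ x + t))  # including the trend removes the effect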
19,533
What to make of explanatories in time series?
The same intuition as in cross-section regression can be used in time-series regression. It is perfectly valid to try to explain the trend using other variables. The main difference is that it is implicitly assumed that the regressors are random variables. So in the regression model
$$Y_t=\beta_0+X_{t1}\beta_1+...+X_{tk}\beta_k+\varepsilon_t$$
we require $E(\varepsilon_t|X_{t1},...,X_{tk})=0$ instead of $E\varepsilon_t=0$, and $E(\varepsilon_t^2|X_{t1},...,X_{tk})=\sigma^2$ instead of $E\varepsilon_t^2=\sigma^2$.

The practical part of the regression stays the same: all the usual statistics and methods apply. The hard part is to show for which types of random variables, or in this case stochastic processes, $X_{tk}$ we can use classical methods. The usual central limit theorem cannot be applied, since it involves independent random variables, and time series processes are usually not independent. This is where the importance of stationarity comes into play. It has been shown that for a large class of stationary processes a central limit theorem applies, so classical regression analysis can be used.

The main caveat of time-series regression is that it can fail massively when the regressors are not stationary. The usual regression methods can then indicate that the trend is explained when in fact it is not. So if you want to explain a trend, you must check for non-stationarity before proceeding. Otherwise you might arrive at false conclusions.
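In practice, you can check for non-stationarity with a unit-root test before regressing. A minimal sketch using the tseries package (the series here is simulated for illustration):

library(tseries)
set.seed(1)
rw <- cumsum(rnorm(200))  # a random walk: non-stationary
adf.test(rw)              # augmented Dickey-Fuller: fails to reject a unit root
adf.test(diff(rw))        # the differenced series looks stationary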
19,534
What to make of explanatories in time series?
When you have supporting/causal/helping/right-hand-side/exogenous/predictor series, the preferred approach is to construct a single-equation, multiple-input transfer function. One needs to examine possible model residuals both for unspecified/omitted deterministic inputs (i.e., do intervention detection a la Ruey Tsay, 1988, Journal of Forecasting) and for unspecified stochastic inputs via an ARIMA component. Thus you can explicitly include not only the user-suggested causals (and any needed lags!) but also two kinds of omitted structures (dummies and ARIMA). Care should be taken to ensure that the parameters of the final model do not change significantly over time, otherwise data segmentation might be in order, and that the residuals from the final model show no evidence of heterogeneous variance. The trend in the original series may be due to trends in the predictor series, to autoregressive dynamics in the series of interest, to an omitted deterministic series proxied by a steady-state constant, or even to one or more local time trends.
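As an illustrative sketch only (not the specific software this answer has in mind, and without the lag selection or intervention detection steps it describes), one common way to fit a single-equation model with exogenous regressors plus an ARIMA error component in R is auto.arima() from the forecast package with an xreg matrix, on simulated data:
library(forecast)
set.seed(1)
x <- arima.sim(list(ar = 0.7), n = 200)              # hypothetical predictor series
y <- 5 + 2 * x + arima.sim(list(ar = 0.5), n = 200)  # response with ARMA errors
fit <- auto.arima(y, xreg = cbind(x = x))            # regression with ARIMA errors
summary(fit)
checkresiduals(fit)                                  # residual diagnostics: autocorrelation etc.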
19,535
What to make of explanatories in time series?
From a less technical point of view, it is often not very helpful just to explain the trend, that is, to treat time as the predictor of primary interest. The variation of a series over time often reflects the underlying effects of other variables, including autoregressive and/or exogenous processes, which are more conceptually relevant to investigate. It follows that if those variables also vary over time, then controlling for the time effect is in fact needed so as not to fall into the artificially significant relationship that @mpiktas showed.
19,536
How do I find the probability of a type II error?
In addition to specifying $\alpha$ (probability of a type I error), you need a fully specified hypothesis pair, i.e., $\mu_{0}$, $\mu_{1}$ and $\sigma$ need to be known. $\beta$ (probability of type II error) is $1 - \textrm{power}$. I assume a one-sided $H_{1}: \mu_{1} > \mu_{0}$. In R: > sigma <- 15 # theoretical standard deviation > mu0 <- 100 # expected value under H0 > mu1 <- 130 # expected value under H1 > alpha <- 0.05 # probability of type I error # critical value for a level alpha test > crit <- qnorm(1-alpha, mu0, sigma) # power: probability for values > critical value under H1 > (pow <- pnorm(crit, mu1, sigma, lower.tail=FALSE)) [1] 0.63876 # probability for type II error: 1 - power > (beta <- 1-pow) [1] 0.36124 Edit: visualization xLims <- c(50, 180) left <- seq(xLims[1], crit, length.out=100) right <- seq(crit, xLims[2], length.out=100) yH0r <- dnorm(right, mu0, sigma) yH1l <- dnorm(left, mu1, sigma) yH1r <- dnorm(right, mu1, sigma) curve(dnorm(x, mu0, sigma), xlim=xLims, lwd=2, col="red", xlab="x", ylab="density", main="Normal distribution under H0 and H1", ylim=c(0, 0.03), xaxs="i") curve(dnorm(x, mu1, sigma), lwd=2, col="blue", add=TRUE) polygon(c(right, rev(right)), c(yH0r, numeric(length(right))), border=NA, col=rgb(1, 0.3, 0.3, 0.6)) polygon(c(left, rev(left)), c(yH1l, numeric(length(left))), border=NA, col=rgb(0.3, 0.3, 1, 0.6)) polygon(c(right, rev(right)), c(yH1r, numeric(length(right))), border=NA, density=5, lty=2, lwd=2, angle=45, col="darkgray") abline(v=crit, lty=1, lwd=3, col="red") text(crit+1, 0.03, adj=0, label="critical value") text(mu0-10, 0.025, adj=1, label="distribution under H0") text(mu1+10, 0.025, adj=0, label="distribution under H1") text(crit+8, 0.01, adj=0, label="power", cex=1.3) text(crit-12, 0.004, expression(beta), cex=1.3) text(crit+5, 0.0015, expression(alpha), cex=1.3)
19,537
How do I find the probability of a type II error?
To supplement caracal's answer: if you are looking for a user-friendly GUI option for calculating Type II error rates or power for many common designs, including the ones implied by your question, you may wish to check out the free software G*Power 3.
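If you prefer a scripted alternative, the pwr package in R should reproduce the calculation from the previous answer for this z-test setup (effect size d = (130 - 100)/15 = 2, a single observation, one-sided test):
library(pwr)
pwr.norm.test(d = (130 - 100) / 15, n = 1, sig.level = 0.05,
              alternative = "greater")
# power ~ 0.639, so beta = 1 - power ~ 0.361, matching the earlier answer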
19,538
Calculating percentile of normal distribution
For Mathematica $VersionNumber > 5 you can use Quantile[NormalDistribution[μ, σ], q] for the q-quantile; the second argument must be a probability between 0 and 1, so use q/100 for the q-th percentile. Otherwise, you have to load the appropriate Statistics package first.
19,539
Calculating percentile of normal distribution
John Cook's page, Distributions in Scipy, is a good reference for this type of stuff: In [15]: import scipy.stats In [16]: scipy.stats.norm.ppf(0.975) Out[16]: 1.959963984540054
19,540
Calculating percentile of normal distribution
Well, you didn't ask about R, but in R you do it using ?qnorm. (Strictly, qnorm returns the quantile rather than the percentile, so pass the percentile divided by 100.) > qnorm(.5) [1] 0 > qnorm(.95) [1] 1.644854
19,541
Calculating percentile of normal distribution
In Python, you can use the stats module from the scipy package: cdf() gives the percentile rank of a value, and its inverse ppf() gives the value at a given percentile, as in the scipy example in another answer. (It seems the transcendental package also includes the usual cumulative distributions.)
19,542
Calculating percentile of normal distribution
You can use the inverse erf function, which is available in MATLAB and Mathematica, for instance. For the normal CDF, starting from $$y=\Phi\left(x\right)=\frac{1}{2}\left[1+\text{erf}\left(\frac{x}{\sqrt{2}}\right)\right]$$ we get $$x=\sqrt{2}\ \text{erf}^{-1}\left(2y-1\right).$$ For the log-normal CDF, starting from $$y=F_{X}(x;\mu,\sigma)=\frac{1}{2}\text{erfc}\left(\frac{\mu-\log x}{\sigma\sqrt{2}}\right)$$ we get $$\log x=\mu-\sigma\sqrt{2}\ \text{erfc}^{-1}\left(2y\right).$$
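R has no built-in inverse erf, but it can be expressed through qnorm, which gives a quick numerical check of the first identity (a verification sketch of my own):
erfinv <- function(z) qnorm((z + 1) / 2) / sqrt(2)  # inverse erf via the normal quantile
y <- 0.975
sqrt(2) * erfinv(2 * y - 1)  # 1.959964
qnorm(y)                     # the same value, as the identity predicts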
19,543
Transform "Standard Poisson" to any Poisson
No, that is not possible. For instance, assume we want to "transform" Poisson realizations with $\lambda=1$ into Poisson samples with $\lambda'=5$. The PMF at $0$ for $\lambda=1$ is $\frac{1}{e}\approx 0.368$, so about 36.8% of the original samples will be $0$. But the cumulative distribution function for $\lambda'=5$ is only $0.265$ at $x=3$. That is, we would somehow need to map an original observation of $0$ to transformed observations $0,1,2,3,4$, and to do so in a way that reproduces the PMF for the new $\lambda'$. This is simply not possible without an RNG. The same holds for "transformations" between any two discrete distributions (except, of course, for trivial cases).
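The two probabilities in the argument are easy to verify in R:
dpois(0, lambda = 1)  # 0.3679: the share of lambda = 1 samples equal to 0
ppois(3, lambda = 5)  # 0.2650: P(X <= 3) under lambda = 5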
Transform "Standard Poisson" to any Poisson
No, that is not possible. For instance, assume we want to "transform" Poisson realizations with $\lambda=1$ to Poisson samples with $\lambda'=5$. The PMF at $0$ for $\lambda=1$ is $\frac{1}{e}\approx
Transform "Standard Poisson" to any Poisson No, that is not possible. For instance, assume we want to "transform" Poisson realizations with $\lambda=1$ to Poisson samples with $\lambda'=5$. The PMF at $0$ for $\lambda=1$ is $\frac{1}{e}\approx 0.368$, so about 36.8% of the original samples will be $0$. But the cumulative distribution function for $\lambda'=5$ is only $0.265$ for $x=3$. That is, we would need somehow map an original observation of $0$ to transformed observations $0,1,2,3,4$ - and this in a way that satisfies the PMF for the new $\lambda'$. This is simply not possible without a RNG. The same holds for "transformations" between any two discrete distributions (except of course for trivial cases).
Transform "Standard Poisson" to any Poisson No, that is not possible. For instance, assume we want to "transform" Poisson realizations with $\lambda=1$ to Poisson samples with $\lambda'=5$. The PMF at $0$ for $\lambda=1$ is $\frac{1}{e}\approx
19,544
Transform "Standard Poisson" to any Poisson
If you have a lot of independent Poisson values with $\lambda=1$, then it is possible to construct independent Poisson values with $\lambda^\prime=n$ for any positive integer $n$ by adding $n$ of your original values together. To take Stephan Kolassa's example with $n=5$: simulating $500000$ cases of $\lambda=1$ to generate $100000$ cases of $\lambda^\prime=5$, and then comparing to the actual distribution in red, you could use this R code: set.seed(2021) n <- 5 samplesize <- 10^5 x1 <- rpois(n * samplesize, lambda=1) xn <- rowSums(matrix(x1, ncol=n)) table(xn) # xn # 0 1 2 3 4 5 6 7 8 9 10 11 12 # 693 3403 8359 14001 17540 17385 14758 10594 6452 3642 1724 869 367 # 13 14 15 16 17 18 # 134 54 14 9 1 1 plot(table(xn) / samplesize) m <- min(xn):max(xn) points(m, dpois(m, lambda=n), col="red") which is pretty close for a simulation
Transform "Standard Poisson" to any Poisson
If you have a lot of independent Poisson values when $\lambda=1$ then is possible to construct independent Poisson values when $\lambda^\prime=n$ for any positive integer $n$ by adding $n$ of your ori
Transform "Standard Poisson" to any Poisson If you have a lot of independent Poisson values when $\lambda=1$ then is possible to construct independent Poisson values when $\lambda^\prime=n$ for any positive integer $n$ by adding $n$ of your original values together To take Stephan Kolassa's example with $n=5$, by simulating $500000$ cases of $\lambda=1$ to generate $100000$ cases of $\lambda^\prime=5$, and then comparing to the actual distribution in red, you could use this R code: set.seed(2021) n <- 5 samplesize <- 10^5 x1 <- rpois(n * samplesize, lambda=1) xn <- rowSums(matrix(x1, ncol=n)) table(xn) # xn # 0 1 2 3 4 5 6 7 8 9 10 11 12 # 693 3403 8359 14001 17540 17385 14758 10594 6452 3642 1724 869 367 # 13 14 15 16 17 18 # 134 54 14 9 1 1 plot(table(xn) / samplesize) m <- min(xn):max(xn) points(m, dpois(m, lambda=n), col="red") which is pretty close for a simulation
Transform "Standard Poisson" to any Poisson If you have a lot of independent Poisson values when $\lambda=1$ then is possible to construct independent Poisson values when $\lambda^\prime=n$ for any positive integer $n$ by adding $n$ of your ori
19,545
Transform "Standard Poisson" to any Poisson
To clarify, $\lambda=\operatorname{E}(X)=\operatorname{Var}(X)$. The normal approximation is $N(\mu=\lambda,\sigma^2=\lambda)$ when $\lambda$ is "large". $\lambda$ is often interpreted as an event arrival rate. If you construct pseudorandom arrival times for the rate $\lambda_1 = \frac{1}{\text{1 hour}}$, you could "reuse" the constructed arrival times for a slower arrival process $\lambda_{24} = \frac{1}{\text{24 hours}}$ by scaling the pseudorandom intervals. Defining $T_0$ as the interval start time, the arrival times for the slower process are obtained by stretching those of the faster one: $$t_{i,24} = 24 ~(t_{i,1}-T_0)\quad .$$ In general, $$t_{i,\lambda_2} = (t_{i,\lambda_1} - T_0)~ \frac{\lambda_1}{\lambda_2}\quad.$$
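A small R sketch of this rescaling (my own illustration): stretching unit-rate arrival times by a factor of 24 yields a process whose counts per 24-hour window are again Poisson with mean 1.
set.seed(1)
n <- 10^5
t1 <- cumsum(rexp(n, rate = 1))  # arrival times at 1 event per hour (taking T0 = 0)
t24 <- 24 * t1                   # stretched: 1 event per 24 hours on average
k <- floor(max(t24) / 24) - 1    # number of complete 24-hour windows
counts <- tabulate(floor(t24 / 24) + 1, nbins = k)  # events per 24-hour window
c(mean(counts), var(counts))     # both close to 1, as expected for Poisson(1)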
Transform "Standard Poisson" to any Poisson
To clarify, $\lambda=\operatorname{E}(X)=\operatorname{Var}(X)$. The normal approximation is $N(\mu=\lambda,\sigma^2=\lambda)$ when $\lambda$ is "large". $\lambda$ is often interpreted as an event ar
Transform "Standard Poisson" to any Poisson To clarify, $\lambda=\operatorname{E}(X)=\operatorname{Var}(X)$. The normal approximation is $N(\mu=\lambda,\sigma^2=\lambda)$ when $\lambda$ is "large". $\lambda$ is often interpreted as an event arrival rate. If you construct pseudorandom arrival times for the rate $\lambda_1 = \frac{1}{\text{1 hour}}$; you could "reuse" the constructed arrival times for a slower arrival process $\lambda_{24} = \frac{1}{\text{24 hour}}$ by scaling the pseudorandom intervals. Defining $T_0$ as the interval start time, your arrival times for the faster process would stretch: $$t_{i,24} = 24 ~(t_{i,1}-T_0)\quad .$$ In general, $$t_{i,\lambda_2} = (t_{i,\lambda_1} - T_0)~ \frac{\lambda_1}{\lambda_2}\quad.$$
Transform "Standard Poisson" to any Poisson To clarify, $\lambda=\operatorname{E}(X)=\operatorname{Var}(X)$. The normal approximation is $N(\mu=\lambda,\sigma^2=\lambda)$ when $\lambda$ is "large". $\lambda$ is often interpreted as an event ar
19,546
Trend in irregular time series data
Rather than try to decompose the time series explicitly, I would instead suggest that you model the data spatio-temporally because, as you'll see below, the long-term trend likely varies spatially, and the seasonal trend varies both with the long-term trend and spatially. I have found that generalised additive models (GAMs) are a good model for fitting irregular time series such as you describe. Below I illustrate a quick model I prepared for the full data of the following form \begin{align} \begin{split} \mathrm{E}(y_i) & = \alpha + f_1(\text{ToD}_i) + f_2(\text{DoY}_i) + f_3(\text{Year}_i) + f_4(\text{x}_i, \text{y}_i) + \\ & \quad f_5(\text{DoY}_i, \text{Year}_i) + f_6(\text{x}_i, \text{y}_i, \text{ToD}_i) + \\ & \quad f_7(\text{x}_i, \text{y}_i, \text{DoY}_i) + f_8(\text{x}_i, \text{y}_i, \text{Year}_i) \end{split} \end{align} where $\alpha$ is the model intercept, $f_1(\text{ToD}_i)$ is a smooth function of time of day, $f_2(\text{DoY}_i)$ is a smooth function of day of year, $f_3(\text{Year}_i)$ is a smooth function of year, $f_4(\text{x}_i, \text{y}_i)$ is a 2D smooth of longitude and latitude, $f_5(\text{DoY}_i, \text{Year}_i)$ is a tensor product smooth of day of year and year, $f_6(\text{x}_i, \text{y}_i, \text{ToD}_i)$ is a tensor product smooth of location & time of day, $f_7(\text{x}_i, \text{y}_i, \text{DoY}_i)$ is a tensor product smooth of location & day of year, and $f_8(\text{x}_i, \text{y}_i, \text{Year}_i)$ is a tensor product smooth of location & year. Effectively, the first four smooths are the main effects of time of day, season, long-term trend, and spatial variation, whilst the remaining four tensor product smooths are smooth interactions between the stated covariates, which model how the seasonal pattern of temperature varies over time, how the time of day effect varies spatially, how the seasonal effect varies spatially, and how the long-term trend varies spatially. The data are loaded into R and massaged a bit with the following code library('mgcv') library('ggplot2') library('viridis') theme_set(theme_bw()) library('gganimate') galveston <- read.csv('gbtemp.csv') galveston <- transform(galveston, datetime = as.POSIXct(paste(DATE, TIME), format = '%m/%d/%y %H:%M', tz = "CDT")) galveston <- transform(galveston, STATION_ID = factor(STATION_ID), DoY = as.numeric(format(datetime, format = '%j')), ToD = as.numeric(format(datetime, format = '%H')) + (as.numeric(format(datetime, format = '%M')) / 60)) The model itself is fitted using the bam() function, which is designed for fitting GAMs to larger data sets such as this. You can use gam() for this model also, but it will take somewhat longer to fit. knots <- list(DoY = c(0.5, 366.5)) M <- list(c(1, 0.5), NA) m <- bam(MEASUREMENT ~ s(ToD, k = 10) + s(DoY, k = 30, bs = 'cc') + s(YEAR, k = 30) + s(LONGITUDE, LATITUDE, k = 100, bs = 'ds', m = c(1, 0.5)) + ti(DoY, YEAR, bs = c('cc', 'tp'), k = c(15, 15)) + ti(LONGITUDE, LATITUDE, ToD, d = c(2,1), bs = c('ds','tp'), m = M, k = c(20, 10)) + ti(LONGITUDE, LATITUDE, DoY, d = c(2,1), bs = c('ds','cc'), m = M, k = c(25, 15)) + ti(LONGITUDE, LATITUDE, YEAR, d = c(2,1), bs = c('ds','tp'), m = M, k = c(25, 15)), data = galveston, method = 'fREML', knots = knots, nthreads = 4, discrete = TRUE) The s() terms are the main effects, whilst the ti() terms are tensor product interaction smooths where the main effects of the named covariates have been removed from the basis. These ti() smooths are a way to include interactions of the stated variables in a numerically stable way.
The knots object is just setting the endpoints of the cyclic smooth I used for the day of year effect: we want 23:59 on Dec 31st to join up smoothly with 00:01 on Jan 1st. This accounts to some extent for leap years. The model summary indicates all these effects are significant: > summary(m) Family: gaussian Link function: identity Formula: MEASUREMENT ~ s(ToD, k = 10) + s(DoY, k = 12, bs = "cc") + s(YEAR, k = 30) + s(LONGITUDE, LATITUDE, k = 100, bs = "ds", m = c(1, 0.5)) + ti(DoY, YEAR, bs = c("cc", "tp"), k = c(12, 15)) + ti(LONGITUDE, LATITUDE, ToD, d = c(2, 1), bs = c("ds", "tp"), m = list(c(1, 0.5), NA), k = c(20, 10)) + ti(LONGITUDE, LATITUDE, DoY, d = c(2, 1), bs = c("ds", "cc"), m = list(c(1, 0.5), NA), k = c(25, 12)) + ti(LONGITUDE, LATITUDE, YEAR, d = c(2, 1), bs = c("ds", "tp"), m = list(c(1, 0.5), NA), k = c(25, 15)) Parametric coefficients: Estimate Std. Error t value Pr(>|t|) (Intercept) 21.75561 0.07508 289.8 <2e-16 *** --- Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1 Approximate significance of smooth terms: edf Ref.df F p-value s(ToD) 3.036 3.696 5.956 0.000189 *** s(DoY) 9.580 10.000 3520.098 < 2e-16 *** s(YEAR) 27.979 28.736 59.282 < 2e-16 *** s(LONGITUDE,LATITUDE) 54.555 99.000 4.765 < 2e-16 *** ti(DoY,YEAR) 131.317 140.000 34.592 < 2e-16 *** ti(ToD,LONGITUDE,LATITUDE) 42.805 171.000 0.880 < 2e-16 *** ti(DoY,LONGITUDE,LATITUDE) 83.277 240.000 1.225 < 2e-16 *** ti(YEAR,LONGITUDE,LATITUDE) 84.862 329.000 1.101 < 2e-16 *** --- Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1 R-sq.(adj) = 0.94 Deviance explained = 94.2% fREML = 29807 Scale est. = 2.6318 n = 15276 A more careful analysis would want to check if we need all these interactions; some of the spatial ti() terms explain only small amounts of variation in the data, as indicated by the $F$ statistics, and there's a lot of data here, so even small effect sizes may be statistically significant but uninteresting. As a quick check, however, removing the three spatial ti() smooths (m.sub) results in a significantly poorer fit as assessed by AIC: > AIC(m, m.sub) df AIC m 447.5680 58583.81 m.sub 239.7336 59197.05 We can plot the partial effects of the first five smooths using the plot() method; the 3D tensor product smooths can't easily be plotted, and are not plotted by default. plot(m, pages = 1, scheme = 2, shade = TRUE, scale = 0) The scale = 0 argument there puts all the plots on their own scales; to compare the magnitudes of the effects, we can turn this off: plot(m, pages = 1, scheme = 2, shade = TRUE) Now we can see that the seasonal effect dominates. The long-term trend (on average) is shown in the upper-right plot. To really look at the long-term trend, however, you need to pick a station and then predict from the model for that station, fixing time of day and day of year to some representative values (midday, for a day of the year in summer, say). The first year or two of the series have some low temperature values relative to the rest of the records, which is likely being picked up in all the smooths involving YEAR. These data should be looked at more closely. This isn't really the place to get into that, but here are a couple of visualisations of the model fits. First I look at the spatial pattern of temperature and how it varies over the years of the series.
To do that I predict from the model for a 100x100 grid over the spatial domain, at midday on day 180 of each year: pdata <- with(galveston, expand.grid(ToD = 12, DoY = 180, YEAR = seq(min(YEAR), max(YEAR), by = 1), LONGITUDE = seq(min(LONGITUDE), max(LONGITUDE), length = 100), LATITUDE = seq(min(LATITUDE), max(LATITUDE), length = 100))) fit <- predict(m, pdata) Then I set to missing, NA, the predicted values fit for all grid points that lie too far from the observations (the threshold is the dist argument, expressed as a proportion of the range of the data): ind <- exclude.too.far(pdata$LONGITUDE, pdata$LATITUDE, galveston$LONGITUDE, galveston$LATITUDE, dist = 0.1) fit[ind] <- NA and join the predictions to the prediction data pred <- cbind(pdata, Fitted = fit) Setting predicted values to NA like this stops us extrapolating beyond the support of the data. Using ggplot2 ggplot(pred, aes(x = LONGITUDE, y = LATITUDE)) + geom_raster(aes(fill = Fitted)) + facet_wrap(~ YEAR, ncol = 12) + scale_fill_viridis(name = expression(degree*C), option = 'plasma', na.value = 'transparent') + coord_quickmap() + theme(legend.position = 'top', legend.key.width = unit(2, 'cm')) we obtain the following We can see the year-to-year variation in temperatures in a bit more detail if we animate rather than facet the plot p <- ggplot(pred, aes(x = LONGITUDE, y = LATITUDE, frame = YEAR)) + geom_raster(aes(fill = Fitted)) + scale_fill_viridis(name = expression(degree*C), option = 'plasma', na.value = 'transparent') + coord_quickmap() + theme(legend.position = 'top', legend.key.width = unit(2, 'cm'))+ labs(x = 'Longitude', y = 'Latitude') gganimate(p, 'galveston.gif', interval = .2, ani.width = 500, ani.height = 800) To look at the long-term trends in more detail, we can predict for particular stations. For example, for STATION_ID 13364 and predicting for days in the four quarters, we might use the following to prepare values of the covariates we want to predict at (midday, on days of year 1, 90, 180, and 270, at the selected station, and evaluating the long-term trend at 500 equally spaced values) pdata <- with(galveston, expand.grid(ToD = 12, DoY = c(1, 90, 180, 270), YEAR = seq(min(YEAR), max(YEAR), length = 500), LONGITUDE = -94.8751, LATITUDE = 29.50866)) Then we predict and ask for standard errors, to form an approximate pointwise 95% confidence interval fit <- data.frame(predict(m, newdata = pdata, se.fit = TRUE)) fit <- transform(fit, upper = fit + (2 * se.fit), lower = fit - (2 * se.fit)) pred <- cbind(pdata, fit) which we plot ggplot(pred, aes(x = YEAR, y = fit, group = factor(DoY))) + geom_ribbon(aes(ymin = lower, ymax = upper), fill = 'grey', alpha = 0.5) + geom_line() + facet_wrap(~ DoY, scales = 'free_y') + labs(x = NULL, y = expression(Temperature ~ (degree * C))) producing Obviously, there's a lot more to modelling these data than what I show here, and we'd want to check for residual autocorrelation and overfitting of the splines, but approaching the problem as one of modelling the features of the data allows for a more detailed examination of the trends. You could of course just model each STATION_ID separately, but that would throw away data, and many stations have few observations. Here the model borrows from all the stations' information to fill in the gaps and assist in estimating the trends of interest. Some notes on bam(): the bam() model is using all of mgcv's tricks to estimate the model quickly: multiple threads (nthreads = 4), fast REML smoothness selection (method = 'fREML'), and discretization of covariates.
With these options turned on, the model fits in less than a minute on my 2013-era dual 4-core Xeon workstation with 64 GB of RAM.
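As a starting point for the residual checks mentioned above, here is a sketch of standard mgcv diagnostics (assuming the fitted object m from above; the acf of pooled residuals is only a crude first look, since observations are irregularly spaced across stations):
gam.check(m)       # basis dimension checks and residual diagnostic plots
acf(residuals(m))  # look for residual autocorrelation
pacf(residuals(m))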
19,547
Trend in irregular time series data
You could just use the function decompose, which separates your time series into three components: trend, seasonal and random. You can also extract the different values from the outcome and plot them. Make sure that you define your data as a time series. The function stl does basically the same thing but gives you more flexibility in how you decompose your data. I also recommend the following website: https://www.otexts.org/fpp/6 Does this help?
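For a regularly spaced series, the usage looks like this (using the built-in monthly co2 series as a stand-in for your data):
plot(decompose(co2))                   # trend + seasonal + random, via moving averages
plot(stl(co2, s.window = "periodic"))  # loess-based decomposition, more flexible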
19,548
Trend in irregular time series data
One option would be to use a regression strategy which treats your data as draws from a continuous underlying function (see discussion here: Is there any gold standard for modeling irregularly spaced time series?). From there you could use a method like Singular Spectrum Analysis to decompose the signal (for R: https://cran.r-project.org/web/packages/Rssa/index.html).
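A minimal Rssa sketch (illustrative only, on the built-in co2 series; in practice the grouping of components always needs inspection, e.g. via plot(s)):
library(Rssa)
s <- ssa(co2)                                  # singular spectrum decomposition
r <- reconstruct(s, groups = list(Trend = 1))  # take the leading component as the trend
plot(co2)
lines(r$Trend, col = "red")                    # extracted trend overlaid on the data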
19,549
What is the difference between a neural network and a perceptron?
Yes, there is - "perceptron" refers to a particular supervised learning model, which was outlined by Rosenblatt in 1957. The perceptron is a particular type of neural network, and is in fact historically important as one of the types of neural network developed. There are other types of neural network which were developed after the perceptron, and the diversity of neural networks continues to grow (especially given how cutting-edge and fashionable deep learning is these days).
19,550
What is the difference between a neural network and a perceptron?
Perceptron models are contained within the set of neural net models. A (single-layer) perceptron is a single-layer neural network that works as a linear binary classifier. Being a single-layer neural network, it can be trained without the use of more advanced algorithms like backpropagation; instead it can be trained by "stepping towards" your error in steps specified by a learning rate. When someone says perceptron, I usually think of the single-layer version. If you're talking about a multilayer perceptron, however, then the term means the same as a feed-forward neural network.
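A minimal R sketch of that single-layer training rule, on toy linearly separable data of my own construction (the weight update is w <- w + eta * (y - yhat) * x):
set.seed(42)
X <- rbind(matrix(rnorm(100, mean = -2), ncol = 2),
           matrix(rnorm(100, mean =  2), ncol = 2))  # 50 points per class
y <- rep(c(0, 1), each = 50)                         # binary labels
w <- c(0, 0, 0)                                      # weights, including a bias term
eta <- 0.1                                           # learning rate
for (epoch in 1:25) {
  for (i in sample(nrow(X))) {
    xi <- c(1, X[i, ])                               # prepend the bias input
    yhat <- as.numeric(sum(w * xi) > 0)              # threshold (step) activation
    w <- w + eta * (y[i] - yhat) * xi                # perceptron update rule
  }
}
mean(as.numeric(X %*% w[-1] + w[1] > 0) == y)        # training accuracy, 1 if separated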
19,551
What is the difference between a neural network and a perceptron?
The perceptron learning procedure cannot be generalised to hidden layers: the perceptron convergence procedure works by ensuring that every time the weights change, they get closer to every "generously feasible" set of weights. This type of guarantee cannot be extended to more complex networks, in which the average of two good solutions may be a bad solution. So "multi-layer" neural networks do not use the perceptron learning procedure; they should never have been called multi-layer perceptrons. (Reference: Coursera.org, Neural Networks course, Week 3)
19,552
What is the difference between a neural network and a perceptron?
As @Nick mentioned, a perceptron is a neural network with a single layer, which uses hand-written programs based on common sense to define the features. These features are used as input to the network, which then makes a binary decision based on them. [Image & explanation based on Hinton's slides on Coursera]
19,553
Why use a z test rather than a t test with proportional data?
Short version: You don't use a t-test because the obvious statistic doesn't have a t-distribution. It does (approximately) have a z-distribution. Longer version: In the usual t-tests, the t-statistics are all of the form $\frac{d}{s}$, where $s$ is an estimated standard error of $d$. The t-distribution arises from the following: 1) $d$ is normally distributed (with mean 0, since we're talking about the distribution under $H_0$), 2) $k\,s^2$ is $\chi^2$-distributed, for some $k$ (I don't want to belabor the details of what $k$ will be, since I'm covering many different forms of t-test here), and 3) $d$ and $s$ are independent. Those are a pretty strict set of circumstances. You only get all three to hold when you have normal data. If, instead, the estimate $s$ is replaced by the actual value of the standard error of $d$ ($\sigma_d$), that form of statistic would have a $z$-distribution. When sample sizes are sufficiently large, a statistic like $d$ (which is often a shifted mean or a difference of means) is very often asymptotically normally distributed*, due to the central limit theorem. * more precisely, a standardized version of $d$, $d/\sigma_d$, will be asymptotically standard normal Many people think that this immediately justifies using a t-test, but as you see from the above list, we only satisfied the first of the three conditions under which the t-test was derived. On the other hand, there's another theorem, Slutsky's theorem, that helps us out. As long as the denominator converges in probability to that unknown standard error $\sigma_d$ (a fairly weak condition), then $d/s$ should converge to a standard normal distribution. The usual one- and two-sample proportions tests are of this form, and thus we have some justification for treating them as asymptotically normal, but we have no justification for treating them as $t$-distributed. In practice, as long as $np$ and $n(1-p)$ are not too small**, the asymptotic normality of the one- and two-sample proportions tests comes in very rapidly (that is, often a surprisingly small $n$ is enough for both theorems to 'kick in', as it were, and for the asymptotic behavior to be a good approximation to small-sample behavior). ** though there are other ways to characterize "large enough" than that, conditions of that form seem to be the most common. While we don't seem to have a good argument (at least not one that I have seen) that would establish that the t should be expected to be better than the z as an approximation to the discrete distribution of the test statistic at any particular sample size, nevertheless in practice the approximation obtained by using a t-test on 0-1 data seems to be quite good, as long as the usual conditions under which the z should be a reasonable approximation hold. is there a simple way to conduct an omnibus test for significant differences between more than 2 proportions (in the form of percentages) Sure. You can put it into the form of a chi-square test. (Indeed, akin to ANOVA, you can even construct contrasts and multiple comparisons and such.) It's not clear from your question, however, whether your generalization will have two samples with several categories, or multiple samples with two categories (or even both at once, I guess). In either case, you can get a chi-square. If you are more specific I should be able to give more specific details.
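For the omnibus question at the end, R's prop.test() performs exactly that chi-square test of equal proportions across several groups (the counts below are made up for illustration):
successes <- c(30, 45, 50)
trials    <- c(100, 120, 110)
prop.test(successes, trials)  # chi-square test that all three proportions are equal
# pairwise follow-ups with a multiplicity adjustment:
pairwise.prop.test(successes, trials, p.adjust.method = "holm")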
Why use a z test rather than a t test with proportional data?
Short version: You don't use a t-test because the obvious statistic doesn't have a t-distribution. It does (approximately) have a z-distribution. Longer version: In the usual t-tests, the t-statistics
Why use a z test rather than a t test with proportional data? Short version: You don't use a t-test because the obvious statistic doesn't have a t-distribution. It does (approximately) have a z-distribution. Longer version: In the usual t-tests, the t-statistics are all of the form: $\frac{d}{s}$, where $s$ is an estimated standard error of $d$. The t-distribution arises from the following: 1) $d$ is normally distributed (with mean 0, since we're talking about distribution under $H_0$) 2) $k.s^2$ is $\chi^2$, for some $k$ (I don't want to belabor the details of what $k$ will be, since I'm covering many different forms of t-test here) 3) $d$ and $s$ are independent Those are a pretty strict set of circumstances. You only get all three to hold when you have normal data. If, instead, the estimate, $s$ is replaced by the actual value of the standard error of $d$ ($\sigma_d$), that form of statistic would have a $z-$distribution. When sample sizes are sufficiently large, a statistic like $d$ (which is often a shifted mean or a difference of means) is very often asymptotically normally distributed*, due to the central limit theorem. * more precisely, a standardized version of $d$, $d/\sigma_d$ will be asymptotically standard normal Many people think that this immediately justifies using a t-test, but as you see from the above list, we only satisfied the first of the three conditions under which the t-test was derived. On the other hand, there's another theorem, called Slutsky's theorem that helps us out. As long as the denominator converges in probability to that unknown standard error, $\sigma_d$ (a fairly weak condition), then $d/s$ should converge to a standard normal distribution. The usual one and two-sample proportions tests are of this form, and thus we have some justification for treating them as asymptotically normal, but we have no justification for treating them as $t$-distributed. In practice, as long as $np$ and $n(1-p)$ are not too small**, the asymptotic normality of the one and two-sample proportions tests comes in very rapidly (that is, often surprisingly small $n$ is enough for both theorems to 'kick in' as it were and the asymptotic behavior to be a good approximation to small sample behavior). ** though there are other ways to characterize "large enough" than that, conditions of that form seem to be the most common. While we don't seem to have a good argument (at least not that I have seen) that would establish that the t should be expected to be better than the z as an approximation to the discrete distribution of the test statistic at any particular sample size, nevertheless in practice the approximation obtained by using a t-test on 0-1 data seems to be quite good, as long as the usual conditions under which the z should be a reasonable approximation hold. is there a simple way to conduct an omnibus test for significant differences between more than 2 proportions (in the form of percentages) Sure. You can put it into the form of a chi-square test. (Indeed, akin to ANOVA you can even construct contrasts and multiple comparisons and such.) It's not clear from your question, however, whether your generalization will have two samples with several categories, or multiple samples with two categories (or even both at once, I guess). In either case, you can get a chi-square. If you are more specific I should be able to give more specific details.
Why use a z test rather than a t test with proportional data? Short version: You don't use a t-test because the obvious statistic doesn't have a t-distribution. It does (approximately) have a z-distribution. Longer version: In the usual t-tests, the t-statistics
19,554
Why use a z test rather than a t test with proportional data?
The reason you can use a $z$-test with proportion data is because the standard deviation of a proportion is a function of the proportion itself. Thus, once you have estimated the proportion in your sample, you don't have an extra source of uncertainty that you have to take into account. As a result, you can use the normal distribution instead of the $t$ distribution as your sampling distribution. To learn more about this, see my answer here: The $z$-test vs the $\chi^2$-test for comparing the odds of catching a cold in 2 groups. If you have more than 2 groups, you can use logistic regression, as you note. You do have to know the $n_j$s in each group however. If you just had a set of observed proportions, but didn't know how many trials had been observed to generate those proportions, you cannot run a proper test of whether the proportions differed.
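As a sketch of the logistic-regression route for more than two groups (my addition, not from the original answer; it assumes statsmodels and SciPy are available, and the counts are hypothetical), one can fit a binomial GLM to the (successes, failures) counts, which requires the $n_j$, and compare it to the intercept-only model:

```python
import numpy as np
from scipy import stats
import statsmodels.api as sm

# Hypothetical counts for three groups; the known n_j are essential.
n_j = np.array([120, 150, 130])
successes = np.array([66, 70, 81])
y = np.column_stack([successes, n_j - successes])   # (successes, failures)

X = sm.add_constant(np.eye(3)[:, 1:])               # dummy-coded groups, baseline = group 0
full = sm.GLM(y, X, family=sm.families.Binomial()).fit()
null = sm.GLM(y, np.ones((3, 1)), family=sm.families.Binomial()).fit()

# Likelihood-ratio (omnibus) test that all three proportions are equal.
lr = 2 * (full.llf - null.llf)
print(lr, stats.chi2.sf(lr, df=2))
```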
Why use a z test rather than a t test with proportional data?
The reason you can use a $z$-test with proportion data is because the standard deviation of a proportion is a function of the proportion itself. Thus, once you have estimated the proportion in your s
Why use a z test rather than a t test with proportional data? The reason you can use a $z$-test with proportion data is because the standard deviation of a proportion is a function of the proportion itself. Thus, once you have estimated the proportion in your sample, you don't have an extra source of uncertainty that you have to take into account. As a result, you can use the normal distribution instead of the $t$ distribution as your sampling distribution. To learn more about this, see my answer here: The $z$-test vs the $\chi^2$-test for comparing the odds of catching a cold in 2 groups. If you have more than 2 groups, you can use logistic regression, as you note. You do have to know the $n_j$s in each group however. If you just had a set of observed proportions, but didn't know how many trials had been observed to generate those proportions, you cannot run a proper test of whether the proportions differed.
Why use a z test rather than a t test with proportional data? The reason you can use a $z$-test with proportion data is because the standard deviation of a proportion is a function of the proportion itself. Thus, once you have estimated the proportion in your s
19,555
Constructing example showing $\mathbb{E}(X^{-1})=(\mathbb{E}(X))^{-1}$
Let's construct all possible examples of random variables $X$ for which $E[X]E[1/X]=1$. Then, among them, we may follow some heuristics to obtain the simplest possible example. These heuristics consist of giving the simplest possible values to all expressions that drop out of a preliminary analysis. This turns out to be the textbook example. Preliminary analysis This requires only a little bit of analysis based on definitions. The solution is of only secondary interest: the main objective is to develop insights to help us understand the results intuitively. First observe that Jensen's Inequality (or the Cauchy-Schwarz Inequality) implies that for a positive random variable $X$, $E[X]E[1/X] \ge 1$, with equality holding if and only if $X$ is "degenerate": that is, $X$ is almost surely constant. When $X$ is a negative random variable, $-X$ is positive and the preceding result holds with the inequality sign reversed. Consequently, any example where $E[1/X]=1/E[X]$ must have positive probability of being negative and positive probability of being positive. The insight here is that any $X$ with $E[X]E[1/X]=1$ must somehow be "balancing" the inequality from its positive part against the inequality in the other direction from its negative part. This will become clearer as we go along. Consider any nonzero random variable $X$. An initial step in formulating a definition of expectation (at least when this is done in full generality using measure theory) is to decompose $X$ into its positive and negative parts, both of which are positive random variables: $$\eqalign{ Y &= \operatorname{Positive part}(X) = \max(0, X);\\ Z &= \operatorname{Negative part}(X) = -\min(0, X). }$$ Let's think of $X$ as a mixture of $Y$ with weight $p$ and $-Z$ with weight $1-p$ where $$p=\Pr(X\gt 0),\ 1-p = \Pr(X \lt 0).$$ Obviously $$0 \lt p \lt 1.$$ This will enable us to write expectations of $X$ and $1/X$ in terms of the expectations of the positive variables $Y$ and $Z$. To simplify the forthcoming algebra a little, note that uniformly rescaling $X$ by a number $\sigma$ does not change $E[X]E[1/X]$--but it does multiply $E[Y]$ and $E[Z]$ each by $\sigma$. For positive $\sigma$, this simply amounts to selecting the units of measurement of $X$. A negative $\sigma$ switches the roles of $Y$ and $Z$. Choosing the sign of $\sigma$ appropriately we may therefore suppose $$E[Z]=1\text{ and }E[Y] \ge E[Z].\tag{1}$$ Notation That's it for preliminary simplifications. To create a nice notation, let us therefore write $$\mu = E[Y];\ \nu = E[1/Y];\ \lambda=E[1/Z]$$ for the three expectations we cannot control. All three quantities are positive. Jensen's Inequality asserts $$\mu\nu \ge 1\text{ and }\lambda \ge 1.\tag{2}$$ The Law of Total Probability expresses the expectations of $X$ and $1/X$ in terms of the quantities we have named: $$E[X] = E[X\mid X\gt 0]\Pr(X \gt 0) + E[X\mid X \lt 0]\Pr(X \lt 0) = \mu p - (1-p) = (\mu + 1)p - 1$$ and, since $1/X$ has the same sign as $X$, $$E\left[\frac{1}{X}\right] = E\left[\frac{1}{X}\mid X\gt 0\right]\Pr(X \gt 0) + E\left[\frac{1}{X}\mid X \lt 0\right]\Pr(X \lt 0) = \nu p - \lambda(1-p) = (\nu + \lambda)p - \lambda.$$ Equating the product of these two expressions with $1$ provides an essential relationship among the variables: $$1 = E[X]E\left[\frac{1}{X}\right] = ((\mu +1)p - 1)((\nu + \lambda)p - \lambda).\tag{*}$$ Reformulation of the Problem Suppose the parts of $X$--$Y$ and $Z$--are any positive random variables (degenerate or not). That determines $\mu, \nu,$ and $\lambda$. 
When can we find $p$, with $0 \lt p \lt 1$, for which $(*)$ holds? This clearly articulates the "balancing" insight previously stated only vaguely: we are going to hold $Y$ and $Z$ fixed and hope to find a value of $p$ that appropriately balances their relative contributions to $X$. Although it's not immediately evident that such a $p$ need exist, what is clear is that it depends only on the moments $E[Y]$, $E[1/Y]$, $E[Z]$, and $E[1/Z]$. The problem thereby is reduced to relatively simple algebra--all the analysis of random variables has been completed. Solution This algebraic problem isn't too hard to solve, because $(*)$ is at worst a quadratic equation for $p$ and the governing inequalities $(1)$ and $(2)$ are relatively simple. Indeed, $(*)$ tells us the product of its roots $p_1$ and $p_2$ is $$p_1p_2 = (\lambda - 1)\frac{1}{(\mu+1)(\nu+\lambda)} \ge 0$$ and the sum is $$p_1 + p_2 = (2\lambda + \lambda \mu + \nu)\frac{1}{(\mu+1)(\nu+\lambda)} \gt 0.$$ Therefore both roots must be positive. Furthermore, their average is less than $1$, because $$ 1 - \frac{(p_1+p_2)}{2} = \frac{\lambda \mu + \nu + 2 \mu \nu}{2(\mu+1)(\nu+\lambda)} \gt 0.$$ (By doing a bit of algebra, it's not hard to show the larger of the two roots does not exceed $1$, either.) A Theorem Here is what we have found: Given any two positive random variables $Y$ and $Z$ (at least one of which is nondegenerate) for which $E[Y]$, $E[1/Y]$, $E[Z]$, and $E[1/Z]$ exist and are finite, there exist either one or two values $p$, with $0 \lt p \lt 1$, that determine a mixture variable $X$ with weight $p$ for $Y$ and weight $1-p$ for $-Z$ and for which $E[X]E[1/X]=1$. Every such instance of a random variable $X$ with $E[X]E[1/X]=1$ is of this form. That gives us a rich set of examples indeed! Constructing the Simplest Possible Example Having characterized all examples, let's proceed to construct one that is as simple as possible. For the negative part $Z$, let's choose a degenerate variable--the very simplest kind of random variable. It will be scaled to make its value $1$, whence $\lambda=1$. The solution of $(*)$ includes $p_1=0$, reducing it to an easily solved linear equation: the only positive root is $$p = \frac{1}{1+\mu} + \frac{1}{1+\nu}.\tag{3}$$ For the positive part $Y$, we obtain nothing useful if $Y$ is degenerate, so let's give it some probability at just two distinct positive values $a \lt b$, say $\Pr(Y=b)=q$. In this case the definition of expectation gives $$\mu = E[Y] = (1-q)a + qb;\ \nu = E[1/Y] = (1-q)/a + q/b.$$ To make this even simpler, let's make $Y$ and $1/Y$ identical: this forces $q=1-q=1/2$ and $a=1/b$. Now $$\mu = \nu = \frac{b + 1/b}{2}.$$ The solution $(3)$ simplifies to $$p = \frac{2}{1+\mu} = \frac{4}{2 + b + 1/b}.$$ How can we make this involve simple numbers? Since $a\lt b$ and $ab=1$, necessarily $b\gt 1$. Let's choose the simplest number greater than $1$ for $b$; namely, $b=2$. The foregoing formula yields $p = 4/(2+2+1/2) = 8/9$ and our candidate for the simplest possible example therefore is $$\eqalign{ \Pr(X=2) = \Pr(X=b) = \Pr(Y=b)p = qp = \frac{1}{2}\frac{8}{9} = \frac{4}{9};\\ \Pr(X=1/2) = \Pr(X=a) = \Pr(Y=a)p = qp = \cdots = \frac{4}{9};\\ \Pr(X=-1) = \Pr(Z=1)(1-p) = 1-p = \frac{1}{9}. }$$ This is the very example offered in the textbook.
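As a quick sanity check (my addition, not part of the original answer), exact rational arithmetic confirms both the final example and formula $(3)$; Python's standard-library Fraction suffices:

```python
from fractions import Fraction as F

# Exact check of the final example: X takes 2, 1/2, -1 with masses 4/9, 4/9, 1/9.
support = [F(2), F(1, 2), F(-1)]
probs = [F(4, 9), F(4, 9), F(1, 9)]
EX = sum(p * x for p, x in zip(probs, support))
EinvX = sum(p / x for p, x in zip(probs, support))
assert EX == 1 and EinvX == 1          # hence E[X]E[1/X] = 1 exactly

# Formula (3) with a degenerate negative part (lambda = 1) and Y in {1/b, b}:
b = F(2)
mu = (b + 1 / b) / 2                   # mu = nu = (b + 1/b)/2
p = 2 / (1 + mu)                       # = 1/(1+mu) + 1/(1+nu) since mu = nu
assert p == F(8, 9)
```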
Constructing example showing $\mathbb{E}(X^{-1})=(\mathbb{E}(X))^{-1}$
Let's construct all possible examples of random variables $X$ for which $E[X]E[1/X]=1$. Then, among them, we may follow some heuristics to obtain the simplest possible example. These heuristics cons
Constructing example showing $\mathbb{E}(X^{-1})=(\mathbb{E}(X))^{-1}$ Let's construct all possible examples of random variables $X$ for which $E[X]E[1/X]=1$. Then, among them, we may follow some heuristics to obtain the simplest possible example. These heuristics consist of giving the simplest possible values to all expressions that drop out of a preliminary analysis. This turns out to be the textbook example. Preliminary analysis This requires only a little bit of analysis based on definitions. The solution is of only secondary interest: the main objective is to develop insights to help us understand the results intuitively. First observe that Jensen's Inequality (or the Cauchy-Schwarz Inequality) implies that for a positive random variable $X$, $E[X]E[1/X] \ge 1$, with equality holding if and only if $X$ is "degenerate": that is, $X$ is almost surely constant. When $X$ is a negative random variable, $-X$ is positive and the preceding result holds with the inequality sign reversed. Consequently, any example where $E[1/X]=1/E[X]$ must have positive probability of being negative and positive probability of being positive. The insight here is that any $X$ with $E[X]E[1/X]=1$ must somehow be "balancing" the inequality from its positive part against the inequality in the other direction from its negative part. This will become clearer as we go along. Consider any nonzero random variable $X$. An initial step in formulating a definition of expectation (at least when this is done in full generality using measure theory) is to decompose $X$ into its positive and negative parts, both of which are positive random variables: $$\eqalign{ Y &= \operatorname{Positive part}(X) = \max(0, X);\\ Z &= \operatorname{Negative part}(X) = -\min(0, X). }$$ Let's think of $X$ as a mixture of $Y$ with weight $p$ and $-Z$ with weight $1-p$ where $$p=\Pr(X\gt 0),\ 1-p = \Pr(X \lt 0).$$ Obviously $$0 \lt p \lt 1.$$ This will enable us to write expectations of $X$ and $1/X$ in terms of the expectations of the positive variables $Y$ and $Z$. To simplify the forthcoming algebra a little, note that uniformly rescaling $X$ by a number $\sigma$ does not change $E[X]E[1/X]$--but it does multiply $E[Y]$ and $E[Z]$ each by $\sigma$. For positive $\sigma$, this simply amounts to selecting the units of measurement of $X$. A negative $\sigma$ switches the roles of $Y$ and $Z$. Choosing the sign of $\sigma$ appropriately we may therefore suppose $$E[Z]=1\text{ and }E[Y] \ge E[Z].\tag{1}$$ Notation That's it for preliminary simplifications. To create a nice notation, let us therefore write $$\mu = E[Y];\ \nu = E[1/Y];\ \lambda=E[1/Z]$$ for the three expectations we cannot control. All three quantities are positive. 
Jensen's Inequality asserts $$\mu\nu \ge 1\text{ and }\lambda \ge 1.\tag{2}$$ The Law of Total Probability expresses the expectations of $X$ and $1/X$ in terms of the quantities we have named: $$E[X] = E[X\mid X\gt 0]\Pr(X \gt 0) + E[X\mid X \lt 0]\Pr(X \lt 0) = \mu p - (1-p) = (\mu + 1)p - 1$$ and, since $1/X$ has the same sign as $X$, $$E\left[\frac{1}{X}\right] = E\left[\frac{1}{X}\mid X\gt 0\right]\Pr(X \gt 0) + E\left[\frac{1}{X}\mid X \lt 0\right]\Pr(X \lt 0) = \nu p - \lambda(1-p) = (\nu + \lambda)p - \lambda.$$ Equating the product of these two expressions with $1$ provides an essential relationship among the variables: $$1 = E[X]E\left[\frac{1}{X}\right] = ((\mu +1)p - 1)((\nu + \lambda)p - \lambda).\tag{*}$$ Reformulation of the Problem Suppose the parts of $X$--$Y$ and $Z$--are any positive random variables (degenerate or not). That determines $\mu, \nu,$ and $\lambda$. When can we find $p$, with $0 \lt p \lt 1$, for which $(*)$ holds? This clearly articulates the "balancing" insight previously stated only vaguely: we are going to hold $Y$ and $Z$ fixed and hope to find a value of $p$ that appropriately balances their relative contributions to $X$. Although it's not immediately evident that such a $p$ need exist, what is clear is that it depends only on the moments $E[Y]$, $E[1/Y]$, $E[Z]$, and $E[1/Z]$. The problem thereby is reduced to relatively simple algebra--all the analysis of random variables has been completed. Solution This algebraic problem isn't too hard to solve, because $(*)$ is at worst a quadratic equation for $p$ and the governing inequalities $(1)$ and $(2)$ are relatively simple. Indeed, $(*)$ tells us the product of its roots $p_1$ and $p_2$ is $$p_1p_2 = (\lambda - 1)\frac{1}{(\mu+1)(\nu+\lambda)} \ge 0$$ and the sum is $$p_1 + p_2 = (2\lambda + \lambda \mu + \nu)\frac{1}{(\mu+1)(\nu+\lambda)} \gt 0.$$ Therefore both roots must be positive. Furthermore, their average is less than $1$, because $$ 1 - \frac{(p_1+p_2)}{2} = \frac{\lambda \mu + \nu + 2 \mu \nu}{2(\mu+1)(\nu+\lambda)} \gt 0.$$ (By doing a bit of algebra, it's not hard to show the larger of the two roots does not exceed $1$, either.) A Theorem Here is what we have found: Given any two positive random variables $Y$ and $Z$ (at least one of which is nondegenerate) for which $E[Y]$, $E[1/Y]$, $E[Z]$, and $E[1/Z]$ exist and are finite, there exist either one or two values $p$, with $0 \lt p \lt 1$, that determine a mixture variable $X$ with weight $p$ for $Y$ and weight $1-p$ for $-Z$ and for which $E[X]E[1/X]=1$. Every such instance of a random variable $X$ with $E[X]E[1/X]=1$ is of this form. That gives us a rich set of examples indeed! Constructing the Simplest Possible Example Having characterized all examples, let's proceed to construct one that is as simple as possible. For the negative part $Z$, let's choose a degenerate variable--the very simplest kind of random variable. It will be scaled to make its value $1$, whence $\lambda=1$. The solution of $(*)$ includes $p_1=0$, reducing it to an easily solved linear equation: the only positive root is $$p = \frac{1}{1+\mu} + \frac{1}{1+\nu}.\tag{3}$$ For the positive part $Y$, we obtain nothing useful if $Y$ is degenerate, so let's give it some probability at just two distinct positive values $a \lt b$, say $\Pr(Y=b)=q$. In this case the definition of expectation gives $$\mu = E[Y] = (1-q)a + qb;\ \nu = E[1/Y] = (1-q)/a + q/b.$$ To make this even simpler, let's make $Y$ and $1/Y$ identical: this forces $q=1-q=1/2$ and $a=1/b$.
Now $$\mu = \nu = \frac{b + 1/b}{2}.$$ The solution $(3)$ simplifies to $$p = \frac{2}{1+\mu} = \frac{4}{2 + b + 1/b}.$$ How can we make this involve simple numbers? Since $a\lt b$ and $ab=1$, necessarily $b\gt 1$. Let's choose the simplest number greater than $1$ for $b$; namely, $b=2$. The foregoing formula yields $p = 4/(2+2+1/2) = 8/9$ and our candidate for the simplest possible example therefore is $$\eqalign{ \Pr(X=2) = \Pr(X=b) = \Pr(Y=b)p = qp = \frac{1}{2}\frac{8}{9} = \frac{4}{9};\\ \Pr(X=1/2) = \Pr(X=a) = \Pr(Y=a)p = qp = \cdots = \frac{4}{9};\\ \Pr(X=-1) = \Pr(Z=1)(1-p) = 1-p = \frac{1}{9}. }$$ This is the very example offered in the textbook.
Constructing example showing $\mathbb{E}(X^{-1})=(\mathbb{E}(X))^{-1}$ Let's construct all possible examples of random variables $X$ for which $E[X]E[1/X]=1$. Then, among them, we may follow some heuristics to obtain the simplest possible example. These heuristics cons
19,556
Constructing example showing $\mathbb{E}(X^{-1})=(\mathbb{E}(X))^{-1}$
As you've mentioned, if $X$ is positive then $E(1/X)=1/E(X)$ occurs only when $X$ is almost surely constant. Otherwise you need $X$ to take both negative and positive values. To construct such an example, first go as simple as possible. Assume $X$ takes two values, $a$ and $b$, with probabilities $p$ and $1-p$ respectively. Then $$E(X)=ap+b(1-p)$$ and $$E(1/X)=\frac1ap+\frac1b(1-p).$$ To have $1/E(X)=E(1/X)$ we require $$ap+b(1-p)=\frac1{\frac1ap+\frac1b(1-p)}$$ which rearranges to the requirement $$(a-b)^2p(1-p)=0.$$ This means the only possible solution must have either $a=b$, or $p=0$, or $p=1$. In all cases we return to the degenerate case: $X$ is constant. Next try: a distribution with three possible values. Here there are many more choices. The example you cited tries an $X$ such that $1/X$ has the same distribution. If we know $X$ takes three values, it must be that one of the values is either $1$ or $-1$, and the other two must be $a$ and $1/a$ for some choice of $a$. For definiteness let's try $P(X=a)=P(X=1/a)=p$, and $P(X=-1)=1-2p$. Then $$ E(1/X)=E(X)=(a+\frac1a)p-(1-2p)=(2+a+\frac1a)p-1.\tag1 $$ To meet the requirement $1/E(X)=E(1/X)$ we demand $E(X)=1$ or $E(X)=-1$. Expression (1) is never $-1$ unless $p=0$, which returns us to the degenerate case again. So aim for $E(X)=1$, which gives $$(2+a+\frac1a)p=2\quad\Leftrightarrow\quad p=\frac2{2+a+\frac1a}=\frac{2a}{(a+1)^2}.\tag2$$ Expression (2) gives an entire family of solutions that meet the requirement. The only constraint is that $a$ must be positive. The example you cited takes $a=2$. Only the case $a=1$ is degenerate.
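A small sketch (my addition, not part of the original answer) verifying the whole family given by expression (2) with exact arithmetic; the particular values of $a$ are arbitrary:

```python
from fractions import Fraction as F

def moments(a):
    """X takes a, 1/a, -1 with P(a) = P(1/a) = p and P(-1) = 1 - 2p,
    where p = 2a/(a+1)^2 as in expression (2)."""
    p = 2 * a / (a + 1) ** 2
    support = [a, 1 / a, F(-1)]
    probs = [p, p, 1 - 2 * p]
    EX = sum(q * x for q, x in zip(probs, support))
    EinvX = sum(q / x for q, x in zip(probs, support))
    return EX, EinvX

for a in [F(2), F(3), F(7, 5)]:        # any positive a except a = 1 works
    assert moments(a) == (1, 1)        # E[X] = E[1/X] = 1, so E[1/X] = 1/E[X]
```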
Constructing example showing $\mathbb{E}(X^{-1})=(\mathbb{E}(X))^{-1}$
As you've mentioned, if $X$ is positive then $E(1/X)=1/E(X)$ occurs only when $X$ is almost surely constant. Otherwise you need $X$ to take both negative and positive values. To construct such an exam
Constructing example showing $\mathbb{E}(X^{-1})=(\mathbb{E}(X))^{-1}$ As you've mentioned, if $X$ is positive then $E(1/X)=1/E(X)$ occurs only when $X$ is almost surely constant. Otherwise you need $X$ to take both negative and positive values. To construct such an example, first go as simple as possible. Assume $X$ takes two values, $a$ and $b$, with probabilities $p$ and $1-p$ respectively. Then $$E(X)=ap+b(1-p)$$ and $$E(1/X)=\frac1ap+\frac1b(1-p).$$ To have $1/E(X)=E(1/X)$ we require $$ap+b(1-p)=\frac1{\frac1ap+\frac1b(1-p)}$$ which rearranges to the requirement $$(a-b)^2p(1-p)=0.$$ This means the only possible solution must have either $a=b$, or $p=0$, or $p=1$. In all cases we return to the degenerate case: $X$ is constant. Next try: a distribution with three possible values. Here there are many more choices. The example you cited tries an $X$ such that $1/X$ has the same distribution. If we know $X$ takes three values, it must be that one of the values is either $1$ or $-1$, and the other two must be $a$ and $1/a$ for some choice of $a$. For definiteness let's try $P(X=a)=P(X=1/a)=p$, and $P(X=-1)=1-2p$. Then $$ E(1/X)=E(X)=(a+\frac1a)p-(1-2p)=(2+a+\frac1a)p-1.\tag1 $$ To meet the requirement $1/E(X)=E(1/X)$ we demand $E(X)=1$ or $E(X)=-1$. Expression (1) is never $-1$ unless $p=0$, which returns us to the degenerate case again. So aim for $E(X)=1$, which gives $$(2+a+\frac1a)p=2\quad\Leftrightarrow\quad p=\frac2{2+a+\frac1a}=\frac{2a}{(a+1)^2}.\tag2$$ Expression (2) gives an entire family of solutions that meet the requirement. The only constraint is that $a$ must be positive. The example you cited takes $a=2$. Only the case $a=1$ is degenerate.
Constructing example showing $\mathbb{E}(X^{-1})=(\mathbb{E}(X))^{-1}$ As you've mentioned, if $X$ is positive then $E(1/X)=1/E(X)$ occurs only when $X$ is almost surely constant. Otherwise you need $X$ to take both negative and positive values. To construct such an exam
19,557
Whence the beta distribution?
As a former physicist I can see how it could have been derived. This is how physicists proceed: when they encounter a finite integral of a positive function, such as the beta function: $$B(x,y) = \int_0^1t^{x-1}(1-t)^{y-1}\,dt$$ they instinctively define a density: $$f(s|x,y)=\frac{s^{x-1}(1-s)^{y-1}}{\int_0^1t^{x-1}(1-t)^{y-1}\,dt}=\frac{s^{x-1}(1-s)^{y-1}}{B(x,y)},$$ where $0<s<1$. They do this to all kinds of integrals so often that it happens reflexively, without even thinking. They call this procedure "normalization" or similar names. Notice how, by definition, the density trivially has all the properties that you want it to have: it is always positive and it integrates to one. The density $f(s|x,y)$ that I gave above is that of the Beta distribution. UPDATE @whuber asks what's so special about the Beta distribution when the above logic could be applied to an infinite number of suitable integrals (as I noted in my answer above). The special part comes from the binomial distribution. I'll write its PDF using notation similar to my beta, not the usual notation for parameters and variables: $$ f'(x,y|s) = \binom {y+x} x s^x(1-s)^{y}$$ Here, $x,y$ are the numbers of successes and failures, and $s$ is the probability of success. You can see how this is very similar to the numerator of the Beta density. In fact, if you look for the conjugate prior for the binomial distribution, it'll be the Beta distribution. This is also not surprising because the domain of the Beta is 0 to 1, and that's what you do in Bayes' theorem: integrate over the parameter $s$, which is the probability of success in this case, as shown below: $$\hat f(s|X)=\frac{f'(X|s)f(s)}{\int_0^1 f'(X|s)f(s)ds},$$ where $f(s)$ is the (prior Beta) density of the probability of success, and $f'(X|s)$ is the density of this data set (i.e. the observed successes and failures) given a probability of success $s$.
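To make the conjugacy claim concrete, here is a minimal numerical sketch (my addition, assuming SciPy is available; the prior parameters and counts are arbitrary) that normalizes the likelihood-times-prior integral directly and compares it with the closed-form Beta posterior:

```python
import numpy as np
from scipy import stats, integrate

a, b = 2.0, 3.0          # prior Beta(a, b)
x, y = 7, 5              # observed successes and failures

def unnormalized(s):
    # binomial likelihood (up to a constant) times the Beta prior density
    return s**x * (1 - s)**y * stats.beta.pdf(s, a, b)

Z, _ = integrate.quad(unnormalized, 0, 1)       # the "normalization" step
s_grid = np.linspace(0.01, 0.99, 5)
posterior_numeric = unnormalized(s_grid) / Z
posterior_conjugate = stats.beta.pdf(s_grid, a + x, b + y)
print(np.allclose(posterior_numeric, posterior_conjugate))   # True
```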
Whence the beta distribution?
As a former physicist I can see how it could have been derived. This is how physicists proceed: when they encounter a finite integral of a positive function, such as the beta function: $$B(x,y) = \int_0^1
Whence the beta distribution? As a former physicist I can see how it could have been derived. This is how physicists proceed: when they encounter a finite integral of a positive function, such as the beta function: $$B(x,y) = \int_0^1t^{x-1}(1-t)^{y-1}\,dt$$ they instinctively define a density: $$f(s|x,y)=\frac{s^{x-1}(1-s)^{y-1}}{\int_0^1t^{x-1}(1-t)^{y-1}\,dt}=\frac{s^{x-1}(1-s)^{y-1}}{B(x,y)},$$ where $0<s<1$. They do this to all kinds of integrals so often that it happens reflexively, without even thinking. They call this procedure "normalization" or similar names. Notice how, by definition, the density trivially has all the properties that you want it to have: it is always positive and it integrates to one. The density $f(s|x,y)$ that I gave above is that of the Beta distribution. UPDATE @whuber asks what's so special about the Beta distribution when the above logic could be applied to an infinite number of suitable integrals (as I noted in my answer above). The special part comes from the binomial distribution. I'll write its PDF using notation similar to my beta, not the usual notation for parameters and variables: $$ f'(x,y|s) = \binom {y+x} x s^x(1-s)^{y}$$ Here, $x,y$ are the numbers of successes and failures, and $s$ is the probability of success. You can see how this is very similar to the numerator of the Beta density. In fact, if you look for the conjugate prior for the binomial distribution, it'll be the Beta distribution. This is also not surprising because the domain of the Beta is 0 to 1, and that's what you do in Bayes' theorem: integrate over the parameter $s$, which is the probability of success in this case, as shown below: $$\hat f(s|X)=\frac{f'(X|s)f(s)}{\int_0^1 f'(X|s)f(s)ds},$$ where $f(s)$ is the (prior Beta) density of the probability of success, and $f'(X|s)$ is the density of this data set (i.e. the observed successes and failures) given a probability of success $s$.
Whence the beta distribution? As a former physicist I can see how it could have been derived. This is how physicists proceed: when they encounter a finite integral of a positive function, such as the beta function: $$B(x,y) = \int_0^1
19,558
Whence the beta distribution?
Thomas Bayes (1763) derived the Beta distribution [without using this name] as the very first example of posterior distribution, predating Leonhard Euler's (1766) work on the Beta integral pointed out by Glen_b by a few years, but the integral also appears in Euler (1729 or 1738) [Opera Omnia, I14, 1–24] as a way to generalise the factorial function $-$which may be why the normalising Beta constant $B(a,b)$ is also called the Euler function$-$. Davies mentions Wallis (1616-1703), Newton (1642-1726), and Stirling (1692-1770) dealing with special cases of the integral even earlier. Karl Pearson (1895) first catalogued this family of distributions as Pearson Type I. Although it did not historically appear in that order, an intuitive entry to the Beta distribution is through Fisher's $F(p,q)$ distribution, which corresponds to the distribution of a ratio $$ \varrho=\hat\sigma^2_1\big/\hat\sigma_2^2\qquad p\hat\sigma_1^2\sim\chi^2_p\quad q\hat\sigma_2^2\sim\chi^2_q$$ where I purposely used the usual notations for variance estimators as this is how this distribution appeared and was motivated, for testing the equality of two variances. Then $$ \frac{p\varrho}{q+p\varrho}\sim B(p/2,q/2) $$ while, conversely, if $\omega\sim B(a,b)$, then $$ \dfrac{\omega/a}{(1-\omega)/b}\sim F(2a,2b) $$ Finding the density of a $B(a,b)$ distribution is thus a change-of-variable step: starting from the density of an $F(p,q)$ distribution, $$ f_{p,q}(x) \propto \{px/q\}^{p/2-1}(1+px/q)^{-(p+q)/2}$$ and considering the change of variable$$y=\frac{\{px/q\}}{\{1+px/q\}}\quad y\in(0,1)$$which inverts into$$x=\frac{qy}{p(1-y)}$$ the Jacobian is$$\frac{\text{d}x}{\text{d}y}=\frac{q}{p(1-y)}+\frac{qy}{p(1-y)^2}=\frac{q}{p(1-y)^2}$$which leads to the density of the transform $$g(y)\propto y^{p/2-1}(1-y)^{q/2+1}(1-y)^{-2}=y^{p/2-1}(1-y)^{q/2-1},$$ the density of a $B(p/2,q/2)$ distribution. (All normalisation constants are obtained by imposing that the density integrate to one.)
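A quick Monte Carlo check of the $F\to$ Beta change of variable (my addition, not from the original answer; it assumes SciPy, and the degrees of freedom are chosen arbitrarily):

```python
import numpy as np
from scipy import stats

# If X ~ F(p, q), then Y = (pX/q) / (1 + pX/q) should be Beta(p/2, q/2).
rng = np.random.default_rng(0)
p, q = 5, 8
x = stats.f.rvs(p, q, size=100_000, random_state=rng)
y = (p * x / q) / (1 + p * x / q)
print(stats.kstest(y, stats.beta(p / 2, q / 2).cdf))   # should not reject
```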
Whence the beta distribution?
Thomas Bayes (1763) derived the Beta distribution [without using this name] as the very first example of posterior distribution, predating Leonhard Euler's (1766) work on the Beta integral pointed out b
Whence the beta distribution? Thomas Bayes (1763) derived the Beta distribution [without using this name] as the very first example of posterior distribution, predating Leonhard Euler's (1766) work on the Beta integral pointed out by Glen_b by a few years, but the integral also appears in Euler (1729 or 1738) [Opera Omnia, I14, 1–24] as a way to generalise the factorial function $-$which may be why the normalising Beta constant $B(a,b)$ is also called the Euler function$-$. Davies mentions Wallis (1616-1703), Newton (1642-1726), and Stirling (1692-1770) dealing with special cases of the integral even earlier. Karl Pearson (1895) first catalogued this family of distributions as Pearson Type I. Although it did not historically appear in that order, an intuitive entry to the Beta distribution is through Fisher's $F(p,q)$ distribution, which corresponds to the distribution of a ratio $$ \varrho=\hat\sigma^2_1\big/\hat\sigma_2^2\qquad p\hat\sigma_1^2\sim\chi^2_p\quad q\hat\sigma_2^2\sim\chi^2_q$$ where I purposely used the usual notations for variance estimators as this is how this distribution appeared and was motivated, for testing the equality of two variances. Then $$ \frac{p\varrho}{q+p\varrho}\sim B(p/2,q/2) $$ while, conversely, if $\omega\sim B(a,b)$, then $$ \dfrac{\omega/a}{(1-\omega)/b}\sim F(2a,2b) $$ Finding the density of a $B(a,b)$ distribution is thus a change-of-variable step: starting from the density of an $F(p,q)$ distribution, $$ f_{p,q}(x) \propto \{px/q\}^{p/2-1}(1+px/q)^{-(p+q)/2}$$ and considering the change of variable$$y=\frac{\{px/q\}}{\{1+px/q\}}\quad y\in(0,1)$$which inverts into$$x=\frac{qy}{p(1-y)}$$ the Jacobian is$$\frac{\text{d}x}{\text{d}y}=\frac{q}{p(1-y)}+\frac{qy}{p(1-y)^2}=\frac{q}{p(1-y)^2}$$which leads to the density of the transform $$g(y)\propto y^{p/2-1}(1-y)^{q/2+1}(1-y)^{-2}=y^{p/2-1}(1-y)^{q/2-1},$$ the density of a $B(p/2,q/2)$ distribution. (All normalisation constants are obtained by imposing that the density integrate to one.)
Whence the beta distribution? Thomas Bayes (1763) derived the Beta distribution [without using this name] as the very first example of posterior distribution, predating Leonhard Euler's (1766) work on the Beta integral pointed out b
19,559
Whence the beta distribution?
The beta distribution can be seen as the distribution of probabilities in the center of a jittered distribution. First of all, I am not good in mathematically precise descriptions of concepts in my head, but I'll try my best using a simple example: Imagine you have a bow, many arrows and a target. Let's further say your hit rate $\lambda$ (for hitting the target) is precisely a function of the distance to the center of the target and of the following form \begin{eqnarray} \lambda=g(x)=\lambda_{max}-(q|x-x_0|)^\frac{1}{q},~q > 0,~0 \leq \lambda \leq \lambda_{max} \end{eqnarray} where $x$ is the position and $x_0$ the center of the target, so that $|x-x_0|$ is the distance to the center. For $q=1/2$ this would be a first order approximation of a Gaussian. That would mean that you most frequently hit the bull's-eye. Similarly, it approximates any bell-shaped curve, for example, resulting from diffusion of Brownian particles. Now, let us furthermore assume that somebody really brave/stupid tries to trick you and displaces the target on every shot. Thereby we make $x_0$ itself a random variable. If the distribution of that person's movements can be described by a $(p-1)$-power of $g(x)$ (that is, $P(x_0) = C\cdot g(x_0)^{p-1}$), a simple transformation of random variables (remember $P(\lambda)d\lambda=P(x_0)dx_0$) leads to a Beta distributed $\lambda$: \begin{eqnarray}P(\lambda) = P(g^{-1}(\lambda)) \biggl|\frac{dg^{-1}(\lambda)}{d\lambda}\biggl| = C' \cdot \lambda^{p-1} \cdot (\lambda_{max} - \lambda)^{q-1}\end{eqnarray} where the normalization constant $C'$ is the reciprocal of the beta function $B(p,q)$ (when $\lambda_{max}=1$). For the standard parametrization of the beta distribution we would set $\lambda_{max} = 1$. In other words, the beta distribution can be seen as the distribution of probabilities in the center of a jittered distribution. I hope that this derivation gets somewhat close to what your instructor meant. Note that the functional forms of $g(x)$ and $P(x_0)$ are very flexible and range from triangle-like distributions and U-shaped distributions (see example below) to sharply peaked distributions. FYI: I discovered this as a side effect in my doctoral work and reported on it in my thesis in the context of non-stationary neural tuning curves leading to zero-inflated spike count distributions (bimodal with a mode at zero). Applying the concept described above yielded the Beta-Poisson mixture distribution for the neural activity. That distribution can be fit to data. The fitted parameters allow estimating both the distribution $g(x)$ and the jitter distribution $P(x_0)$ by applying the reverse logic. The Beta-Poisson mixture is a very interesting and flexible alternative to the widely used negative binomial distribution (which is a Gamma-Poisson mixture) to model overdispersion. Below you find an example of the "Jitter $\rightarrow$ Beta" idea in action: A: Simulated 1D trial displacement, drawn from the jitter distribution in the inset ($P(jitter)\propto g(x)^{p-1}$). The trial-averaged firing field (solid black line) is broader and has a lower peak rate as compared to the underlying tuning curve without jitter (solid blue line; parameters used: $\lambda_{max} = 10, p = .6, q=.5$). B: The resulting distribution of $\lambda$ at $x_0$ across N=100 trials and the analytical pdf of the Beta distribution. C: Simulated spike count distribution from a Poisson process with parameters $\lambda_i$, where $i$ denotes the trial index, and the resulting Beta-Poisson distribution derived as sketched above.
D: Analogous situation in 2D with random shift angles leading to the identical statistics.
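Here is a minimal simulation sketch of this construction (my addition, not the author's code; it fixes $\lambda_{max}=1$ and chooses $p=2$, $q=1/2$ so that the rejection-sampling acceptance probability $g(x_0)^{p-1}$ stays in $[0,1]$):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
p, q = 2.0, 0.5

def g(x):
    # tuning curve with lambda_max = 1 and center x0 = 0; support is |x| <= 1/q
    return 1.0 - (q * np.abs(x)) ** (1.0 / q)

# Rejection sampling from the jitter density P(x0) ~ g(x0)**(p-1),
# using a uniform proposal on the support (-1/q, 1/q).
x0 = rng.uniform(-1 / q, 1 / q, size=400_000)
keep = rng.uniform(size=x0.size) < g(x0) ** (p - 1)
lam = g(x0[keep])                     # lambda = g(x0) should be Beta(p, q)
print(stats.kstest(lam, stats.beta(p, q).cdf))   # should not reject
```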
Whence the beta distribution?
The beta distribution can be seen as the distribution of probabilities in the center of a jittered distribution. First of all, I am not good in mathematically precise descriptions of concepts in my hea
Whence the beta distribution? The beta distribution can be seen as the distribution of probabilities in the center of a jittered distribution. First of all, I am not good in mathematically precise descriptions of concepts in my head, but I'll try my best using a simple example: Imagine you have a bow, many arrows and a target. Let's further say your hit rate $\lambda$ (for hitting the target) is precisely a function of the distance to the center of the target and of the following form \begin{eqnarray} \lambda=g(x)=\lambda_{max}-(q|x-x_0|)^\frac{1}{q},~q > 0,~0 \leq \lambda \leq \lambda_{max} \end{eqnarray} where $x$ is the position and $x_0$ the center of the target, so that $|x-x_0|$ is the distance to the center. For $q=1/2$ this would be a first order approximation of a Gaussian. That would mean that you most frequently hit the bull's-eye. Similarly, it approximates any bell-shaped curve, for example, resulting from diffusion of Brownian particles. Now, let us furthermore assume that somebody really brave/stupid tries to trick you and displaces the target on every shot. Thereby we make $x_0$ itself a random variable. If the distribution of that person's movements can be described by a $(p-1)$-power of $g(x)$ (that is, $P(x_0) = C\cdot g(x_0)^{p-1}$), a simple transformation of random variables (remember $P(\lambda)d\lambda=P(x_0)dx_0$) leads to a Beta distributed $\lambda$: \begin{eqnarray}P(\lambda) = P(g^{-1}(\lambda)) \biggl|\frac{dg^{-1}(\lambda)}{d\lambda}\biggl| = C' \cdot \lambda^{p-1} \cdot (\lambda_{max} - \lambda)^{q-1}\end{eqnarray} where the normalization constant $C'$ is the reciprocal of the beta function $B(p,q)$ (when $\lambda_{max}=1$). For the standard parametrization of the beta distribution we would set $\lambda_{max} = 1$. In other words, the beta distribution can be seen as the distribution of probabilities in the center of a jittered distribution. I hope that this derivation gets somewhat close to what your instructor meant. Note that the functional forms of $g(x)$ and $P(x_0)$ are very flexible and range from triangle-like distributions and U-shaped distributions (see example below) to sharply peaked distributions. FYI: I discovered this as a side effect in my doctoral work and reported on it in my thesis in the context of non-stationary neural tuning curves leading to zero-inflated spike count distributions (bimodal with a mode at zero). Applying the concept described above yielded the Beta-Poisson mixture distribution for the neural activity. That distribution can be fit to data. The fitted parameters allow estimating both the distribution $g(x)$ and the jitter distribution $P(x_0)$ by applying the reverse logic. The Beta-Poisson mixture is a very interesting and flexible alternative to the widely used negative binomial distribution (which is a Gamma-Poisson mixture) to model overdispersion. Below you find an example of the "Jitter $\rightarrow$ Beta" idea in action: A: Simulated 1D trial displacement, drawn from the jitter distribution in the inset ($P(jitter)\propto g(x)^{p-1}$). The trial-averaged firing field (solid black line) is broader and has a lower peak rate as compared to the underlying tuning curve without jitter (solid blue line; parameters used: $\lambda_{max} = 10, p = .6, q=.5$). B: The resulting distribution of $\lambda$ at $x_0$ across N=100 trials and the analytical pdf of the Beta distribution. C: Simulated spike count distribution from a Poisson process with parameters $\lambda_i$, where $i$ denotes the trial index, and the resulting Beta-Poisson distribution derived as sketched above.
D: Analogous situation in 2D with random shift angles leading to the identical statistics.
Whence the beta distribution? The beta distribution can be seen as the distribution of probabilities in the center of a jittered distribution. First of all, I am not good in mathematically precise descriptions of concepts in my hea
19,560
What is the difference between $\beta_1$ and $\hat{\beta}_1$?
$\beta_1$ is an idea - it doesn't really exist in practice. But if the Gauss-Markov assumptions hold, $\beta_1$ would give you that optimal slope, with the values above and below it on a vertical "slice" parallel to the axis of the dependent variable forming a nice Gaussian distribution of residuals. $\hat \beta_1$ is the estimate of $\beta_1$ based on the sample. The idea is that you are working with a sample from a population. Your sample forms a data cloud, if you will. One of the dimensions corresponds to the dependent variable, and you try to fit the line that minimizes the error terms - in OLS, this is the projection of the dependent variable on the vector subspace formed by the column space of the model matrix. These estimates of the population parameters are denoted with the $\hat \beta$ symbol. The more data points you have, the more accurate the estimated coefficients $\hat \beta_i$ are, and the better the estimation of these idealized population coefficients $\beta_i$. Here is the difference in slopes ($\beta$ versus $\hat \beta$) between the "population" in blue, and the sample in isolated black dots: The regression line is dotted and in black, whereas the synthetically perfect "population" line is in solid blue. The abundance of points provides a tactile sense of the normality of the residuals distribution.
What is the difference between $\beta_1$ and $\hat{\beta}_1$?
$\beta_1$ is an idea - it doesn't really exist in practice. But if the Gauss-Markov assumptions hold, $\beta_1$ would give you that optimal slope with values above and below it on a vertical "slice" ve
What is the difference between $\beta_1$ and $\hat{\beta}_1$? $\beta_1$ is an idea - it doesn't really exist in practice. But if the Gauss-Markov assumptions hold, $\beta_1$ would give you that optimal slope, with the values above and below it on a vertical "slice" parallel to the axis of the dependent variable forming a nice Gaussian distribution of residuals. $\hat \beta_1$ is the estimate of $\beta_1$ based on the sample. The idea is that you are working with a sample from a population. Your sample forms a data cloud, if you will. One of the dimensions corresponds to the dependent variable, and you try to fit the line that minimizes the error terms - in OLS, this is the projection of the dependent variable on the vector subspace formed by the column space of the model matrix. These estimates of the population parameters are denoted with the $\hat \beta$ symbol. The more data points you have, the more accurate the estimated coefficients $\hat \beta_i$ are, and the better the estimation of these idealized population coefficients $\beta_i$. Here is the difference in slopes ($\beta$ versus $\hat \beta$) between the "population" in blue, and the sample in isolated black dots: The regression line is dotted and in black, whereas the synthetically perfect "population" line is in solid blue. The abundance of points provides a tactile sense of the normality of the residuals distribution.
What is the difference between $\beta_1$ and $\hat{\beta}_1$? $\beta_1$ is an idea - it doesn't really exist in practice. But if the Gauss-Markov assumptions hold, $\beta_1$ would give you that optimal slope with values above and below it on a vertical "slice" ve
19,561
What is the difference between $\beta_1$ and $\hat{\beta}_1$?
The "hat" symbol generally denotes an estimate, as opposed to the "true" value. Therefore $\hat{\beta}$ is an estimate of $\beta$. A few symbols have their own conventions: the sample variance, for example, is often written as $s^2$, not $\hat{\sigma}^2$, though some people use both to distinguish between biased and unbiased estimates. In your specific case, the $\hat{\beta}$ values are parameter estimates for a linear model. The linear model supposes that the outcome variable $y$ is generated by a linear combination of the data values $x_i$s, each weighted by the corresponding $\beta_i$ value (plus some error $\epsilon$) $$ y = \beta_0 + \beta_1x_1 + \beta_2 x_2 + \cdots + \beta_n x_n + \epsilon$$ In practice, of course, the "true" $\beta$ values are usually unknown and may not even exist (perhaps the data is not generated by a linear model). Nevertheless, we can estimate values from the data that approximate $y$ and these estimates are denoted as $\hat{\beta}$.
What is the difference between $\beta_1$ and $\hat{\beta}_1$?
The "hat" symbol generally denotes an estimate, as opposed to the "true" value. Therefore $\hat{\beta}$ is an estimate of $\beta$. A few symbols have their own conventions: the sample variance, for ex
What is the difference between $\beta_1$ and $\hat{\beta}_1$? The "hat" symbol generally denotes an estimate, as opposed to the "true" value. Therefore $\hat{\beta}$ is an estimate of $\beta$. A few symbols have their own conventions: the sample variance, for example, is often written as $s^2$, not $\hat{\sigma}^2$, though some people use both to distinguish between biased and unbiased estimates. In your specific case, the $\hat{\beta}$ values are parameter estimates for a linear model. The linear model supposes that the outcome variable $y$ is generated by a linear combination of the data values $x_i$s, each weighted by the corresponding $\beta_i$ value (plus some error $\epsilon$) $$ y = \beta_0 + \beta_1x_1 + \beta_2 x_2 + \cdots + \beta_n x_n + \epsilon$$ In practice, of course, the "true" $\beta$ values are usually unknown and may not even exist (perhaps the data is not generated by a linear model). Nevertheless, we can estimate values from the data that approximate $y$ and these estimates are denoted as $\hat{\beta}$.
What is the difference between $\beta_1$ and $\hat{\beta}_1$? The "hat" symbol generally denotes an estimate, as opposed to the "true" value. Therefore $\hat{\beta}$ is an estimate of $\beta$. A few symbols have their own conventions: the sample variance, for ex
19,562
What is the difference between $\beta_1$ and $\hat{\beta}_1$?
The equation $$y_i = \beta_0 + \beta_1 x_i + \epsilon_i $$ is what is termed as the true model. This equation says that the relation between the variable $x$ and the variable $y$ can be explained by a line $y = \beta_0 + \beta_1x$. However, since observed values are never going to follow that exact equation (due to errors), an additional $\epsilon_i$ error term is added to indicate errors. The errors can be interpreted as natural deviations away from the relationship of $x$ and $y$. Below I show two pairs of $x$ and $y$ (the black dots are data). In general one can see that as $x$ increases $y$ increases. For both of the pairs, the true equation is $$y_i = 4 + 3x_i + \epsilon_i $$ but the two plots have different errors. The plot on the left has large errors and the plot on the right small errors (because the points are tighter). (I know the true equation because I generated the data on my own. In general, you never know the true equation.) Let's look at the plot on the left. The true $\beta_0 = 4$ and the true $\beta_1$ = 3. But in practice when given data, we don't know the truth. So we estimate the truth. We estimate $\beta_0$ with $\hat{\beta}_0$ and $\beta_1$ with $\hat{\beta}_1$. Depending on which statistical methods are used, the estimates can be very different. In the regression setting, the estimates are obtained via a method called Ordinary Least Squares. This is also known as the method of the line of best fit. Basically, you need to draw the line that best fits the data. I am not discussing formulas here, but using the formula for OLS, you get $$\hat{\beta}_0 = 4.809 \quad \text{ and } \quad \hat{\beta}_1 = 2.889 $$ and the resulting line of best fit is, A simple example would be the relationship between heights of mothers and daughters. Let $x = $ height of mothers and $y$ = heights of daughters. Naturally, one would expect taller mothers to have taller daughters (due to genetic similarity). However, do you think one equation can summarize exactly the height of a mother and a daughter, so that if I know the height of the mother I will be able to predict the exact height of the daughter? No. On the other hand, one might be able to summarize the relationship with the help of an on-average statement. TL;DR: $\beta$ is the population truth. It represents the unknown relationship between $y$ and $x$. Since we cannot always get all possible values of $y$ and $x$, we collect a sample from the population, and try and estimate $\beta$ using the data. $\hat{\beta}$ is our estimate. It is a function of the data. $\beta$ is not a function of the data, but the truth.
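A minimal sketch of the same setup (my addition); since the errors are randomly generated, the estimates will land near $(4, 3)$ but will not reproduce the $4.809/2.889$ values above:

```python
import numpy as np

# "True" model y = 4 + 3x + eps; OLS recovers estimates beta_0-hat, beta_1-hat.
rng = np.random.default_rng(42)
n = 50
x = rng.uniform(0, 10, size=n)
y = 4 + 3 * x + rng.normal(scale=4, size=n)

X = np.column_stack([np.ones(n), x])          # design matrix with intercept
beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
print(beta_hat)   # close to (4, 3) but not exact -- that's beta-hat vs beta
```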
What is the difference between $\beta_1$ and $\hat{\beta}_1$?
The equation $$y_i = \beta_0 + \beta_1 x_i + \epsilon_i $$ is what is termed as the true model. This equation says that the relation between the variable $x$ and the variable $y$ can be explained by a
What is the difference between $\beta_1$ and $\hat{\beta}_1$? The equation $$y_i = \beta_0 + \beta_1 x_i + \epsilon_i $$ is what is termed as the true model. This equation says that the relation between the variable $x$ and the variable $y$ can be explained by a line $y = \beta_0 + \beta_1x$. However, since observed values are never going to follow that exact equation (due to errors), an additional $\epsilon_i$ error term is added to indicate errors. The errors can be interpreted as natural deviations away from the relationship of $x$ and $y$. Below I show two pairs of $x$ and $y$ (the black dots are data). In general one can see that as $x$ increases $y$ increases. For both of the pairs, the true equation is $$y_i = 4 + 3x_i + \epsilon_i $$ but the two plots have different errors. The plot on the left has large errors and the plot on the right small errors (because the points are tighter). (I know the true equation because I generated the data on my own. In general, you never know the true equation.) Let's look at the plot on the left. The true $\beta_0 = 4$ and the true $\beta_1$ = 3. But in practice when given data, we don't know the truth. So we estimate the truth. We estimate $\beta_0$ with $\hat{\beta}_0$ and $\beta_1$ with $\hat{\beta}_1$. Depending on which statistical methods are used, the estimates can be very different. In the regression setting, the estimates are obtained via a method called Ordinary Least Squares. This is also known as the method of the line of best fit. Basically, you need to draw the line that best fits the data. I am not discussing formulas here, but using the formula for OLS, you get $$\hat{\beta}_0 = 4.809 \quad \text{ and } \quad \hat{\beta}_1 = 2.889 $$ and the resulting line of best fit is, A simple example would be the relationship between heights of mothers and daughters. Let $x = $ height of mothers and $y$ = heights of daughters. Naturally, one would expect taller mothers to have taller daughters (due to genetic similarity). However, do you think one equation can summarize exactly the height of a mother and a daughter, so that if I know the height of the mother I will be able to predict the exact height of the daughter? No. On the other hand, one might be able to summarize the relationship with the help of an on-average statement. TL;DR: $\beta$ is the population truth. It represents the unknown relationship between $y$ and $x$. Since we cannot always get all possible values of $y$ and $x$, we collect a sample from the population, and try and estimate $\beta$ using the data. $\hat{\beta}$ is our estimate. It is a function of the data. $\beta$ is not a function of the data, but the truth.
What is the difference between $\beta_1$ and $\hat{\beta}_1$? The equation $$y_i = \beta_0 + \beta_1 x_i + \epsilon_i $$ is what is termed as the true model. This equation says that the relation between the variable $x$ and the variable $y$ can be explained by a
19,563
Is there ever a reason not to use orthogonal polynomials when fitting regressions?
Ever a reason? Sure; likely several. Consider, for example, where I am interested in the values of the raw coefficients (say to compare them with hypothesized values), and collinearity isn't a particular problem. It's pretty much the same reason why I often don't mean-center in ordinary linear regression (centering being the linear orthogonal polynomial). They're not things you can't deal with via orthogonal polynomials; it's more a matter of convenience, but convenience is a big reason why I do a lot of things. That said, I lean toward orthogonal polynomials in many cases when fitting polynomials, since they do have some distinct benefits.
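To illustrate the trade-off (my addition, using NumPy's QR factorization as a stand-in for R's poly()): the two bases give different coefficients but exactly the same fitted values, and only the raw coefficients are directly comparable with hypothesized values.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, 60)
y = 1 + 2 * x - 3 * x**2 + rng.normal(scale=0.3, size=x.size)

X_raw = np.vander(x, 4, increasing=True)      # columns 1, x, x^2, x^3
Q, R = np.linalg.qr(X_raw)                    # Q: orthonormal polynomial basis

b_raw, *_ = np.linalg.lstsq(X_raw, y, rcond=None)
b_orth, *_ = np.linalg.lstsq(Q, y, rcond=None)

# Same column space, so the fits coincide even though coefficients differ.
print(np.allclose(X_raw @ b_raw, Q @ b_orth))   # True
print(b_raw)                                    # interpretable raw coefficients
print(b_orth)                                   # orthogonal-basis coefficients
```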
Is there ever a reason not to use orthogonal polynomials when fitting regressions?
Ever a reason? Sure; likely several. Consider, for example, where I am interested in the values of the raw coefficients (say to compare them with hypothesized values), and collinearity isn't a particu
Is there ever a reason not to use orthogonal polynomials when fitting regressions? Ever a reason? Sure; likely several. Consider, for example, where I am interested in the values of the raw coefficients (say to compare them with hypothesized values), and collinearity isn't a particular problem. It's pretty much the same reason why I often don't mean-center in ordinary linear regression (centering being the linear orthogonal polynomial). They're not things you can't deal with via orthogonal polynomials; it's more a matter of convenience, but convenience is a big reason why I do a lot of things. That said, I lean toward orthogonal polynomials in many cases when fitting polynomials, since they do have some distinct benefits.
Is there ever a reason not to use orthogonal polynomials when fitting regressions? Ever a reason? Sure; likely several. Consider, for example, where I am interested in the values of the raw coefficients (say to compare them with hypothesized values), and collinearity isn't a particu
19,564
Is there ever a reason not to use orthogonal polynomials when fitting regressions?
Because if your model leaves R when it grows up, you have to remember to pack its centring & normalization constants, & then it has to lug them around the whole time. Imagine coming across it one day hard-coded into SQL, & the horror of realizing it's mislaid them!
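To make the joke concrete, here is a sketch (my addition, in Python rather than R or SQL): predictions from an orthogonalized basis require carrying the stored basis transform, here the R factor of a QR factorization standing in for the constants R's poly() keeps, not just the fitted coefficients.

```python
import numpy as np

rng = np.random.default_rng(3)
x = rng.uniform(0, 5, 40)
y = 2 + 0.5 * x + rng.normal(scale=0.2, size=x.size)

X = np.vander(x, 3, increasing=True)
Q, R = np.linalg.qr(X)                          # orthogonalized training basis
b_orth, *_ = np.linalg.lstsq(Q, y, rcond=None)

# New data must be pushed through the SAME transform: Q_new = X_new @ inv(R).
x_new = np.array([1.0, 2.5, 4.0])
X_new = np.vander(x_new, 3, increasing=True)
Q_new = X_new @ np.linalg.inv(R)
print(Q_new @ b_orth)          # mislay R, and these predictions are lost
```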
Is there ever a reason not to use orthogonal polynomials when fitting regressions?
Because if your model leaves R when it grows up, you have to remember to pack its centring & normalization constants, & then it has to lug them around the whole time. Imagine coming across it one day
Is there ever a reason not to use orthogonal polynomials when fitting regressions? Because if your model leaves R when it grows up, you have to remember to pack its centring & normalization constants, & then it has to lug them around the whole time. Imagine coming across it one day hard-coded into SQL, & the horror of realizing it's mislaid them!
Is there ever a reason not to use orthogonal polynomials when fitting regressions? Because if your model leaves R when it grows up, you have to remember to pack its centring & normalization constants, & then it has to lug them around the whole time. Imagine coming across it one day
19,565
Sum of two normal products is Laplace?
An elementary sequence of steps using well-known relationships among distributions and a simple algebraic polarization identity provide an elementary and intuitive demonstration. I have found this polarization identity generally useful for reasoning about, and computing with, products of random variables, because it reduces them to linear combinations of squares. It is a bit like working with matrices by diagonalizing them first. (There's more than a superficial connection here.) A Laplace distribution is a difference of two Exponentials (which intuitively makes some sense, because an Exponential is a "half-Laplace" distribution). (The link demonstrates this by manipulating characteristic functions, but the relation can be proven using an elementary integration following from the definition of a difference as a convolution.) An Exponential distribution (which itself is a $\Gamma(1)$ distribution) is also a (scaled version of a) $\chi^2(2)$ distribution. The scale factor is $1/2$. This can easily be seen by comparing the PDFs of the two distributions. $\chi^2$ distributions are obtained naturally as sums of squares of iid Normal distributions (having zero means). The degrees of freedom, $2$, count the number of Normal distributions in the sum. The algebraic relation $$X_1X_2 + X_3X_4 = \left[\left(\frac{X_1+X_2}{2}\right)^2 + \left(\frac{X_3+X_4}{2}\right)^2\right] - \left[\left(\frac{X_1-X_2}{2}\right)^2 + \left(\frac{X_3-X_4}{2}\right)^2\right]$$ exhibits $X_1X_2 + X_3X_4$ in terms of squares of four distributions, each of which is a linear combination of standard Normals. It is easy to check that all four linear combinations are mutually orthogonal, and hence (being jointly Normal) independent (and each follows a Normal$(0,\sqrt{1/2})$ distribution). Thus the first two terms, which sum the squares of two independent, identically distributed Normal variables of mean zero, form a scaled $\chi^2(2)$ distribution (and its scale factor of $(\sqrt{1/2})^2=1/2$ is exactly what is needed to make it an Exponential distribution) and the second two terms independently have an Exponential distribution, too, for the same reason. Therefore $X_1X_2+X_3X_4$, being the difference of two independent Exponential distributions, has a (standard) Laplace distribution.
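A Monte Carlo confirmation of both the polarization identity and the Laplace conclusion (my addition, assuming NumPy/SciPy):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
X = rng.standard_normal((4, 200_000))
s = X[0] * X[1] + X[2] * X[3]

# The polarization identity holds pointwise for every sample.
rhs = (((X[0] + X[1]) / 2) ** 2 + ((X[2] + X[3]) / 2) ** 2
       - ((X[0] - X[1]) / 2) ** 2 - ((X[2] - X[3]) / 2) ** 2)
print(np.allclose(s, rhs))                          # True

# And the resulting sum matches a standard Laplace distribution.
print(stats.kstest(s, stats.laplace(0, 1).cdf))     # should not reject
```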
Sum of two normal products is Laplace?
An elementary sequence of steps using well-known relationships among distributions and a simple algebraic polarization identity provide an elementary and intuitive demonstration. I have found this pol
Sum of two normal products is Laplace? An elementary sequence of steps using well-known relationships among distributions and a simple algebraic polarization identity provide an elementary and intuitive demonstration. I have found this polarization identity generally useful for reasoning about, and computing with, products of random variables, because it reduces them to linear combinations of squares. It is a bit like working with matrices by diagonalizing them first. (There's more than a superficial connection here.) A Laplace distribution is a difference of two Exponentials (which intuitively makes some sense, because an Exponential is a "half-Laplace" distribution). (The link demonstrates this by manipulating characteristic functions, but the relation can be proven using an elementary integration following from the definition of a difference as a convolution.) An Exponential distribution (which itself is a $\Gamma(1)$ distribution) is also a (scaled version of a) $\chi^2(2)$ distribution. The scale factor is $1/2$. This can easily be seen by comparing the PDFs of the two distributions. $\chi^2$ distributions are obtained naturally as sums of squares of iid Normal distributions (having zero means). The degrees of freedom, $2$, count the number of Normal distributions in the sum. The algebraic relation $$X_1X_2 + X_3X_4 = \left[\left(\frac{X_1+X_2}{2}\right)^2 + \left(\frac{X_3+X_4}{2}\right)^2\right] - \left[\left(\frac{X_1-X_2}{2}\right)^2 + \left(\frac{X_3-X_4}{2}\right)^2\right]$$ exhibits $X_1X_2 + X_3X_4$ in terms of squares of four distributions, each of which is a linear combination of standard Normals. It is easy to check that all four linear combinations are linearly independent (and each follows a Normal$(0,\sqrt{1/2})$ distribution). Thus the first two terms, which sum the squares of two identically distributed Normal distributions of mean zero, form a scaled $\chi^2(2)$ distribution (and its scale factor of $\sqrt{1/2}\ ^2=1/2$ is exactly what is needed to make it an Exponential distribution) and the second two terms independently have an Exponential distribution, too, for the same reason. Therefore $X_1X_2+X_3X_4$, being the difference of two independent Exponential distributions, has a (standard) Laplace distribution.
Sum of two normal products is Laplace? An elementary sequence of steps using well-known relationships among distributions and a simple algebraic polarization identity provide an elementary and intuitive demonstration. I have found this pol
19,566
Sum of two normal products is Laplace?
$X\sim \mathrm{Laplace}(0,1)$ has characteristic function $$ \phi_X(t) = \frac{1}{1+t^2} $$ which is the square of the characteristic function $1/\sqrt{1+t^2}$ of a product of two independent standard normals (see https://math.stackexchange.com/questions/74013/characteristic-function-of-product-of-normal-random-variables). The claim follows from the fact that the characteristic function of a sum of independent random variables is the product of their characteristic functions.
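A small numerical sanity check of both characteristic functions (my sketch, not from the original answer; the test point t is arbitrary), using the fact that by symmetry the imaginary parts vanish:
set.seed(2)
n <- 1e6
t <- 1.3
x <- rnorm(n); y <- rnorm(n)
u <- rnorm(n); v <- rnorm(n)
c(mean(cos(t * x * y)),       1 / sqrt(1 + t^2))   # CF of one normal product
c(mean(cos(t * (x*y + u*v))), 1 / (1 + t^2))       # CF of the sum = the Laplace(0,1) CF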
Sum of two normal products is Laplace?
$X\sim \mathrm{Laplace}(0,1)$ has characteristic function $$ \phi_X(t) = \frac{1}{1+t^2} $$ which is the square of the characteristic function of a product standard normal (see https://math.stackexcha
Sum of two normal products is Laplace? $X\sim \mathrm{Laplace}(0,1)$ has characteristic function $$ \phi_X(t) = \frac{1}{1+t^2} $$ which is the square of the characteristic function of a product standard normal (see https://math.stackexchange.com/questions/74013/characteristic-function-of-product-of-normal-random-variables). The claim follows by the fact that sums of independent random variables relate to products of characteristic functions.
Sum of two normal products is Laplace? $X\sim \mathrm{Laplace}(0,1)$ has characteristic function $$ \phi_X(t) = \frac{1}{1+t^2} $$ which is the square of the characteristic function of a product standard normal (see https://math.stackexcha
19,567
Distinguishing missing at random (MAR) from missing completely at random (MCAR)
Missing at random (MAR) means that the missingness can be explained by variables on which you have full information. It's not a testable assumption, but there are cases where it is reasonable vs. not. For example, take political opinion polls. Many people refuse to answer. If you assume that the reasons people refuse to answer are entirely based on demographics, and if you have those demographics on each person, then the data are MAR. It is known that some of the reasons why people refuse to answer can be based on demographics (for instance, people at both low and high incomes are less likely to answer than those in the middle), but there's really no way to know if that is the full explanation. So, the question becomes "is it full enough?". Often, methods like multiple imputation work better than other methods as long as the data are not too strongly missing not at random (MNAR).
Distinguishing missing at random (MAR) from missing completely at random (MCAR)
Missing at random (MAR) means that the missingness can be explained by variables on which you have full information. It's not a testable assumption, but there are cases where it is reasonable vs. not.
Distinguishing missing at random (MAR) from missing completely at random (MCAR) Missing at random (MAR) means that the missingness can be explained by variables on which you have full information. It's not a testable assumption, but there are cases where it is reasonable vs. not. For example, take political opinion polls. Many people refuse to answer. If you assume that the reasons people refuse to answer are entirely based on demographics, and if you have those demographics on each person, then the data is MAR. It is known that some of the reasons why people refuse to answer can be based on demographics (for instance, people at both low and high incomes are less likely to answer than those in the middle), but there's really no way to know if that is the full explanation. So, the question becomes "is it full enough?". Often, methods like multiple imputation work better than other methods as long as the data isn't very missing not at random.
Distinguishing missing at random (MAR) from missing completely at random (MCAR) Missing at random (MAR) means that the missingness can be explained by variables on which you have full information. It's not a testable assumption, but there are cases where it is reasonable vs. not.
19,568
Distinguishing missing at random (MAR) from missing completely at random (MCAR)
I'm not sure if this is correct, but the way I've tried to understand it is as a 2x2 matrix of possibilities which isn't quite symmetrical. Something like:
Pattern in the missingness?   Our data explain the pattern?   Mechanism
Yes                           Yes                             MAR
Yes                           No                              MNAR
No                            --                              MCAR
That is, if there is a pattern to a variable's missingness and the data we have cannot explain it, we have MNAR; if the data we have (i.e. other variables in our data set) can explain it, we have MAR. If there is no pattern to the missingness, it's MCAR. I may be way off here. Also, this leaves open the definitions of "pattern" and "data explain". I think of "data explain" as meaning that other variables in your data set explain the missingness, but I believe that your procedure can also explain it (a good example in another thread: you have three measurement variables that measure the same thing, and your procedure is that if the first two measurements disagree by too much, you take a third measurement). Is this accurate enough for intuition, CV?
Distinguishing missing at random (MAR) from missing completely at random (MCAR)
I'm not sure if this is correct, but the way I've tried to understand it is as if there is a 2x2 matrix of possibilities which isn't quite symmetrical. Something like: Pattern / Data Explains Patte
Distinguishing missing at random (MAR) from missing completely at random (MCAR) I'm not sure if this is correct, but the way I've tried to understand it is as if there is a 2x2 matrix of possibilities which isn't quite symmetrical. Something like: Pattern / Data Explains Pattern Yes No Yes MAR MNAR No -- MCAR That is, if there is a pattern to a variable's missingness and the data we have cannot explain it we have MNAR, but if the data we have (i.e. other variables in our data set) can explain it we have MAR. If there is no pattern to the missingness, it's MCAR. I may be way off here. Also, this leaves open the definition of "Pattern", and "Data explains". I think of "Data explains" as meaning other variables in your data set explain it, but I believe that your procedure can also explain it (e.g. a good example in another thread is if you have three measurement variables that measure the same thing and your procedure is if the first two measurements disagree by too much you take a third measurement). Is this accurate enough for intuition, CV?
Distinguishing missing at random (MAR) from missing completely at random (MCAR) I'm not sure if this is correct, but the way I've tried to understand it is as if there is a 2x2 matrix of possibilities which isn't quite symmetrical. Something like: Pattern / Data Explains Patte
19,569
Distinguishing missing at random (MAR) from missing completely at random (MCAR)
I was also struggling to grasp the difference, so maybe some examples could help. MCAR: missing completely at random - this is great. It means that the non-response is completely random, so your survey is not biased. MAR: missing at random - a worse situation. Imagine you are asking for IQ and you have many more female participants than male ones. Luckily for you, the non-response is driven by gender, which you observe, and IQ is not related to gender, so you can control for gender (apply weighting) to reduce bias. MNAR: missing not at random - bad. Consider a survey on income level where people with higher incomes are less likely to report it. Now the probability of a value being missing depends on the very value that is missing, so your results will be biased, and weighting on observed variables cannot fix it. Not easy to get rid of. You see, it is a "triangle" relationship between the target variable (Y, such as income), auxiliary variables (X, such as age or gender) and the response behaviour (R, the response group). If R is related only to observed X, that is manageable (MAR). If R is still related to Y itself even after accounting for the observed X, it's bad (MNAR).
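A small simulation sketch of the three mechanisms (my own illustration, not from the original answer; all variable names and numbers are made up), showing that MCAR leaves the naive mean unbiased, MAR biases it but weighting on the observed variable repairs it, and MNAR cannot be repaired from the observed data alone:
set.seed(3)
n <- 1e5
gender <- rbinom(n, 1, 0.5)                                       # observed auxiliary variable
income <- 30000 + 10000 * gender + rnorm(n, sd = 5000)            # target variable Y
miss_mcar <- rbinom(n, 1, 0.3) == 1                               # ignores everything
miss_mar  <- rbinom(n, 1, ifelse(gender == 1, 0.45, 0.15)) == 1   # depends only on observed gender
miss_mnar <- rbinom(n, 1, plogis((income - 40000) / 5000)) == 1   # depends on income itself
mean(income)                                                      # the truth
mean(income[!miss_mcar])                                          # ~ unbiased
mean(income[!miss_mar])                                           # biased, but fixable:
w <- 1 / ave(as.numeric(!miss_mar), gender, FUN = mean)           # inverse response rate per gender
weighted.mean(income[!miss_mar], w[!miss_mar])                    # ~ back to the truth
mean(income[!miss_mnar])                                          # biased; weighting on gender won't fix it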
Distinguishing missing at random (MAR) from missing completely at random (MCAR)
I was also struggling to grasp the difference, so maybe some examples could help. MCAR: Missing completely at random, this is great. It means that the non-response is completely random. So your survey
Distinguishing missing at random (MAR) from missing completely at random (MCAR) I was also struggling to grasp the difference, so maybe some examples could help. MCAR: Missing completely at random, this is great. It means that the non-response is completely random. So your survey is not biased. MAR: Missing at random, worse situation. Imagine you are asking for IQ and you have much more females participants than males. Lucky for you, IQ is not related to gender, so you can control for gender (apply weighting) to reduce bias. MNAR: Not missing at random, bad. Consider having survey for level of income. And again, you have more females than males participants. In this case, this is a problem, because level of income is related to gender. Therefore your results will be biased. Not easily to get rid of. You see, it is a "triangle" relationship between target variable (Y, such as income), auxiliary variable (X, such as age) and response behavior (R, the response group). If X is related to R only, good-ish (MAR). If there is relation between X and R and X and Y, its bad (MNAR).
Distinguishing missing at random (MAR) from missing completely at random (MCAR) I was also struggling to grasp the difference, so maybe some examples could help. MCAR: Missing completely at random, this is great. It means that the non-response is completely random. So your survey
19,570
Need help identifying a distribution by its histogram
Use fitdistrplus: Here's the CRAN link to fitdistrplus. Here's the old vignette link for fitdistrplus. If the vignette link doesn't work, do a search for "Use of the library fitdistrplus to specify a distribution from data". The vignette does a good job of explaining how to use the package. You can look at how various distributions fit in a short period of time. It also produces a Cullen/Frey diagram.
# Example from the vignette
library(fitdistrplus)
x1 <- c(6.4, 13.3, 4.1, 1.3, 14.1, 10.6, 9.9, 9.6, 15.3, 22.1,
        13.4, 13.2, 8.4, 6.3, 8.9, 5.2, 10.9, 14.4)
plotdist(x1)                 # empirical histogram and CDF
descdist(x1)                 # Cullen and Frey (skewness-kurtosis) plot
f1g <- fitdist(x1, "gamma")  # fit a gamma distribution by maximum likelihood
plot(f1g)                    # goodness-of-fit plots for the fitted gamma
summary(f1g)                 # estimates, standard errors, AIC/BIC
Need help identifying a distribution by its histogram
Use fitdistrplus: Here's the CRAN link to fitdistrplus. Here's the old vignette link for fitdistrplus. If the vignette link doesn't work, do a search for "Use of the library fitdistrplus to specify a
Need help identifying a distribution by its histogram Use fitdistrplus: Here's the CRAN link to fitdistrplus. Here's the old vignette link for fitdistrplus. If the vignette link doesn't work, do a search for "Use of the library fitdistrplus to specify a distribution from data". The vignette does a good job of explaining how to use the package. You can look at how various distributions fit in a short period of time. It also produces a Cullen/Frey Diagram. #Example from the vignette library(fitdistrplus) x1 <- c(6.4, 13.3, 4.1, 1.3, 14.1, 10.6, 9.9, 9.6, 15.3, 22.1, 13.4, 13.2, 8.4, 6.3, 8.9, 5.2, 10.9, 14.4) plotdist(x1) descdist(x1) f1g <- fitdist(x1, "gamma") plot(f1g) summary(f1g)
Need help identifying a distribution by its histogram Use fitdistrplus: Here's the CRAN link to fitdistrplus. Here's the old vignette link for fitdistrplus. If the vignette link doesn't work, do a search for "Use of the library fitdistrplus to specify a
19,571
Need help identifying a distribution by its histogram
"Population is about 15 million samples." Then you will very likely be able to reject any particular distribution of a simple, closed form. Even that tiny bump at the left of the graph is likely to be enough to cause us to say 'clearly not such and such'. On the other hand, it's probably pretty well approximated by a number of common distributions; obvious candidates are things like the lognormal and the gamma, but there are a host of others. If you look at the log of the x-variable, you can probably decide whether the lognormal is okay on sight (after taking logs, the histogram should look symmetric). If the log is left skew, consider whether a gamma is okay; if it's right skew, consider whether an inverse gamma or (even more skew) an inverse Gaussian is okay. But this exercise is more one of finding a distribution that's close enough to live with; none of these suggestions actually has all the features that appear to be present there. If you have any theory at all to support a choice, toss out all this discussion and use that.
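A quick diagnostic sketch along these lines (mine, not part of the original answer; the first line is a lognormal stand-in for your data vector x):
x <- rlnorm(1e5, meanlog = 2, sdlog = 0.6)   # stand-in for your data
lx <- log(x)
hist(lx, breaks = 200, freq = FALSE)         # roughly symmetric? the lognormal may be adequate
qqnorm(lx); qqline(lx)                       # close to a straight line => lognormal fits well
skew <- mean((lx - mean(lx))^3) / sd(lx)^3   # rough sample skewness of log(x)
skew                                         # < 0: try a gamma; > 0: try inverse gamma / inverse Gaussian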
Need help identifying a distribution by its histogram
Population is about 15 million samples. Then you will very likely be able to reject any particular distribution of a simple, closed form. Even that tiny bump at the left of the graph is likely to be
Need help identifying a distribution by its histogram Population is about 15 million samples. Then you will very likely be able to reject any particular distribution of a simple, closed form. Even that tiny bump at the left of the graph is likely to be enough to cause us to say 'clearly not such and such'. On the other hand, it's probably pretty well approximated by a number of common distributions; obvious candidates are things like lognormal and gamma, but there are a host of others. It you look at the log of the x-variable, you can probably decide whether the lognormal is okay on sight (after taking logs, the histogram should look symmetric). If the log is left skew, consider whether Gamma is okay, if it's right skew, consider whether inverse Gamma or (even more skew) inverse Gaussian is okay. But this exercise is more one of finding a distribution that's close enough to live with; none of these suggestions actually have all the features that appear to be present there. If you have any theory at all to support a choice, toss out all this discussion and use that.
Need help identifying a distribution by its histogram Population is about 15 million samples. Then you will very likely be able to reject any particular distribution of a simple, closed form. Even that tiny bump at the left of the graph is likely to be
19,572
Need help identifying a distribution by its histogram
I am not sure why you would want to classify a sample to a specific distribution with such a large sample size; parsimony, comparing it to another sample, looking for a physical interpretation of the parameters? Most statistical packages (R, SAS, Minitab) allow one to plot data on a graph that yields a straight line if the data come from a particular distribution. I have seen graphs that yield a straight line if the data are normal (lognormal, after a log transformation), Weibull, and chi-squared; those come to mind immediately. This technique will allow you to see outliers and give you the possibility to assign reasons for why data points are outliers. In R, the normal probability plot is called qqnorm.
Need help identifying a distribution by its histogram
I am not sure why you would want to classify a sample to a specific distribution with such a large sample size; parsimony, comparing it to another sample, looking for physical interpretation of the pa
Need help identifying a distribution by its histogram I am not sure why you would want to classify a sample to a specific distribution with such a large sample size; parsimony, comparing it to another sample, looking for physical interpretation of the paramters? Most statistical packages(R, SAS, Minitab) allow one to plot data on a graph that yields a straight line if the data come from a particular distribution. I have seen graphs that yield a straight line if the data is normal(log normal-after a log transformation), Weibull, and chi-squared come to mine immediately. This technique will allow you to see outliers and give you the possiblity to assign reasons for why data points are outliers. In R, the normal probability plot is called qqnorm.
Need help identifying a distribution by its histogram I am not sure why you would want to classify a sample to a specific distribution with such a large sample size; parsimony, comparing it to another sample, looking for physical interpretation of the pa
19,573
Why don't we see Copula Models as much as Regression Models?
The first and most important reason is that standard regression models had a one- to two-hundred-year headstart on copula models (depending on exactly where you count the genesis of regression models and copula models). Any explanation of the disparity in usage is going to have to start there. The method of least-squares estimation for fitting functions through data was developed in the early nineteenth century by Legendre and Gauss, and the Gauss-Markov theorem was published by Gauss in 1821. By the late nineteenth century the term "regression" had come into use to describe the narrow phenomenon of regression to the mean, but it was developed further at the end of the nineteenth century in a form that is a clear precursor to the modern theory. In particular, Yule gave a close precursor to the modern regression model in Yule (1897) and Fisher had developed and analysed the standard Gaussian regression model that is used today no later than Fisher (1922). By contrast, copulas were first introduced into statistics in Sklar (1959) and were developed further over later decades. The initial mathematical result underpinning the field was a "folk theorem" for over a decade, until it was proved by multiple authors in the 1970s. The first statistical conference looking at copulas didn't occur until 1990, and even after this, copulas were only really applied in the field of finance. Copula models did not really become widely visible in the statistics profession until about the turn of the twenty-first century, when Li (2000) popularised them in a seminal article in finance. It is probably only in the last two to three decades that copulas have become broadly known even within the statistical profession. As you point out, the copula theory is mathematically more complex, but it is also much, much younger. Statistical theories and models tend to start out with narrow usage confined to scholars in the field and then --- if they have sufficient value --- they expand out to be used more widely by various professionals in a broader range of applied fields. It is not until they become sufficiently widely used in the professions that universities decide it is worth teaching those models in their regular courses. In the present case, widely used copula models are about twenty years old and they have probably only started being taught in universities in the last ten years (and at some universities not yet at all). You only have to go back about a decade and statistics students at a university would not even have heard of copula models (unless they ran into them as a speciality) and would not have had any courses that taught them. So, if you are a statistician/econometrician and you are over forty, you probably will not have learned about copula models unless you have personally gone out of your way to self-learn them outside of your university education. However, you will have had at least a few courses that covered regression modelling, GLMs, etc., and you will have had to implement these models regularly as a student in order to complete your degree. If you are a psychologist or scientist over forty, you almost certainly never learned copula models, but you probably would have encountered regression models in your university training. This has a huge impact on the respective level of usage of the two models in subsequent professional work.
Why don't we see Copula Models as much as Regression Models?
The first and most important reason is that standard regression models had a one to two-hundred year headstart on copula models (depending on exactly where you count the genesis of regression models a
Why don't we see Copula Models as much as Regression Models? The first and most important reason is that standard regression models had a one to two-hundred year headstart on copula models (depending on exactly where you count the genesis of regression models and copula models). Any explanation is the disparity in usage is going to have to start there. The method of least-squares estimation for fitting functions through data was developed in the early nineteenth century by Legendre and Gauss, and the Gauss-Markov theorem was published by Gauss in 1821. By the late nineteenth century the term "regression" had come into use to describe the narrow phenomenon of regression to the mean, but it was developed further at the end of the nineteenth century in a form that is a clear precursor to the modern theory. In particular, Yule gave a close precursor to the modern regression model in Yule (1897) and Fisher had developed and analysed the standard Gaussian regression model that is used today no later than Fisher (1922). Contrarily, copulas were first introduced into statistics in Sklar (1959) and were developed further over later decades. The initial mathematical result underpinning the field was a "folk theorem" for over a decade, until it was proved by multiple authors in the 1970s. The first statistical conference looking at copulas didn't occur until 1990 and even after this, copulas were only really applied in the field of finance. ​ Copula models did not really become widely visible in the statistics profession until about the turn of the twenty-first century, when Li (2000) popularised them in a seminal article in finance. It is probably only in the last two to three decades that copulas have become broadly known even within the statistical profession. As you point out, the copula theory is mathematically more complex, but it is also much, much younger. Statistical theories and models tend to start out with narrow usage confined to scholars in the field and then --- if they have sufficient value--- they expand out to be used more widely by various professionals in a broader range of applied fields. It is not until they become sufficiently widely used in the professions that universities decide it is worth teaching those models in their regular courses. In the present case, copula models are about twenty years old and they have probably only started being taught in the universities in the last ten years (and at some universities not yet at all). You only have to go back about a decade and statistical students at a university would not even have heard of copula models (unless they ran into them as a speciality) and would not have had any courses that taught it. So, if you are a statistician/econometrician and you are over forty, you probably will not have learned about copula models unless you have personally gone out of your way to self-learn it outside of your university education. However, you will have had at least a few courses that covered regression modelling, GLMs, etc., and you will have had to implement these models regularly as a student in order to complete your degree. If you are a psychologist or scientist over forty, you almost certainly never learned copula models, but you probably would have encountered regression models in your university training. This has a huge impact on the respective level of usage of the two models in subsequent professional work.
Why don't we see Copula Models as much as Regression Models? The first and most important reason is that standard regression models had a one to two-hundred year headstart on copula models (depending on exactly where you count the genesis of regression models a
19,574
Why don't we see Copula Models as much as Regression Models?
A reason might be that regression and copulas do not answer the same question. Copulas are about the joint distribution, while regression is about a conditional distribution or just the conditional mean, depending on how you look at it. Yes, copulas are in a sense more general, as you can derive a regression function from them. But except for the most trivial cases, it would be a fairly involved exercise that would not give a closed-form answer. Then, to be able to "see" anything or to get some intuition about the conditional distribution or the conditional mean function, you would need to simulate from the copula. And you do not always have the hardware and the software handy for that. A regression, on the other hand, gives a very straightforward answer to the conditional mean question. It delivers a solution that is much more easily understandable and much easier to visualize in your mind.* So for the purpose of regression (conditional distribution, conditional mean), regression is just much easier to use. And for the purpose of copulas (joint distribution), regression cannot substitute for copulas. But apparently the interest in a joint distribution is not that common? (I end with a question mark, as I am not sure whether it is the interest that is limited or our apparatus that is inadequate / too complex.) Regarding Ben's answer pointing to the historical reason as the most important one, I wonder if that is the case. Trying to imagine what would have happened had copulas and regression started simultaneously, I still see regression winning the popularity battle due to its relative simplicity as well as its sufficiency for a task (modelling of the conditional distribution and/or the conditional mean) that is broadly relevant. *I said more easily and easier, which does not mean easy.
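To illustrate the "you would need to simulate from the copula" point, here is a base-R sketch (my own, with arbitrary marginals and correlation) that estimates a conditional mean from a Gaussian copula by brute-force simulation, next to the one-line regression answer to the same kind of question:
set.seed(4)
rho <- 0.7; n <- 2e5
z1 <- rnorm(n); z2 <- rho * z1 + sqrt(1 - rho^2) * rnorm(n)
u <- pnorm(z1); v <- pnorm(z2)            # a sample from the Gaussian copula
x <- qexp(u, rate = 1)                    # chosen marginal: X ~ Exponential(1)
y <- qlnorm(v, meanlog = 0, sdlog = 0.5)  # chosen marginal: Y ~ lognormal
mean(y[abs(x - 2) < 0.05])                # E[Y | X ~= 2], estimated only by simulation
coef(lm(y ~ x))                           # the regression answer, available in closed form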
Why don't we see Copula Models as much as Regression Models?
A reason might be that regression and copulas do not answer the same question. Copulas are about the joint distribution while regression is about a conditional distribution or just the conditional mea
Why don't we see Copula Models as much as Regression Models? A reason might be that regression and copulas do not answer the same question. Copulas are about the joint distribution while regression is about a conditional distribution or just the conditional mean, depending on how you look at it. Yes, copulas are in a sense more general, as you can derive a regression function from them. But except for the most trivial cases, it would be a fairly involved exercise that would not give a closed-form answer. Then to be able to "see" anything or to get some intuition about the conditional distribution or the conditional mean function, you would need to simulate from the copula. And you do not always have the hardware and the software handy for that. A regression, on the other hand, gives a very straightforward answer to the conditional mean question. It delivers an a solution that is much more easily understandable and much easier to visualize in your mind.* So for the purpose of regression (conditional distribution, conditional mean), regression is just much easier to use. And for the purpose of copulas (joint distribution), regression cannot substitute for copulas. But apparently the interest in a joint distribution is not that common? (I end with a question mark, as I am not sure whether it is the interest that is limited or our apparatus that is inadequate / too complex.) Regarding Ben's answer pointing to the historical reason as the most important one, I wonder if that is the case. Trying to imagine what would have happened had copulas and regression started simultaneously, I still see regression winning the popularity battle due to its relative simplicity as well as sufficiency for a task (modelling of the conditional distribution and/or the conditional mean) that is broadly relevant. *I said more easily and easier which does not mean easy.
Why don't we see Copula Models as much as Regression Models? A reason might be that regression and copulas do not answer the same question. Copulas are about the joint distribution while regression is about a conditional distribution or just the conditional mea
19,575
Why don't we see Copula Models as much as Regression Models?
A short answer is that in practice, for many applications, we don't need the joint probability distribution. A cynic would say that it's also because the users don't even understand what a joint probability distribution is. A lot of applications of statistical modeling are in inference, such as medical studies, and they're interested in what causes certain outcomes. A regression is one of the tools used to do this. In forecasting applications, in many cases users want to do scenario analysis, i.e. "what is y when inputs are x?" - these pre-specify the x's and don't need to sample from their joint distribution. On the other hand, copulas are used a lot in some fields such as financial risk management (FRM) to obtain the joint distribution of the factors. I'll show you one example that will help me answer your question. In FRM you need to obtain the univariate probability distribution $F_y(y)$ of scalar losses $y$. Here's one way you could do it: (1) map losses $y$ to a vector of risk factors $\vec x$; (2) estimate a model $y=\mathcal L(\vec x)+\varepsilon$, perhaps with a regression; (3) estimate the joint distribution of factors $\hat F_{\vec x}(\vec x)$, perhaps with copulas; (4) sample from $\hat F_{\vec x}(.)$ to obtain a set of vectors $\vec x_i$; (5) estimate the univariate probability distribution $\hat F_y(y)$ by fitting it to the losses $\hat y_i=\hat{\mathcal L} (\vec x_i)$. Once you have $\hat F_y(.)$ you can obtain all the risk metrics that you need. You see how I used both regression and copulas here. So, as I mentioned earlier, in business forecasting our model users are interested only in $\hat y|\vec x$, i.e. "what is $y$ when inputs are $\vec x$?" In this case, as in inference applications, we don't need the joint distribution and copulas at all! We only need the [regression] model $\hat{\mathcal L}$; we can specify $\vec x$ ourselves. FRM is one of the fields where we can't specify $\vec x$ in many cases, so we try to obtain the joint distribution $F_{\vec x}$. That's what copulas are useful for.
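A compact sketch of that five-step pipeline in base R (entirely my own illustration: toy data, a Gaussian copula fitted via normal scores, and arbitrary names; a real FRM application would use proper factor models and carefully chosen marginals):
set.seed(5)
n <- 2000
x1 <- rnorm(n); x2 <- 0.6 * x1 + 0.8 * rnorm(n)       # step 1: two correlated risk factors
y  <- 1 + 0.5 * x1 - 1.2 * x2 + rnorm(n, sd = 0.3)    # observed losses
fit <- lm(y ~ x1 + x2)                                # step 2: loss model L(x)
z <- qnorm(cbind(rank(x1), rank(x2)) / (n + 1))       # step 3: normal scores ...
R <- cor(z)                                           # ... give the Gaussian-copula correlation
m  <- 1e5                                             # step 4: sample new factor vectors
zs <- matrix(rnorm(2 * m), ncol = 2) %*% chol(R)
us <- pnorm(zs)
xs1 <- quantile(x1, us[, 1], names = FALSE)           # back to the factor scale via
xs2 <- quantile(x2, us[, 2], names = FALSE)           # the empirical marginals
ys <- predict(fit, data.frame(x1 = xs1, x2 = xs2))    # step 5: push through the loss model
quantile(ys, 0.99)                                    # e.g. a 99% quantile of the loss distribution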
Why don't we see Copula Models as much as Regression Models?
A short answer is that in practice for many applications we don't need the joint probability distributions. A cynic would say that it's also because the users don't event understand what is a joint pr
Why don't we see Copula Models as much as Regression Models? A short answer is that in practice for many applications we don't need the joint probability distributions. A cynic would say that it's also because the users don't event understand what is a joint probability distribution. A lot of applications of statistical modeling are in inference, such as medical studies, and they're interested in what causes certain outcomes. A regression is one of the tools used to do this. In forecasting applications in many cases users want to do scenario analysis, i.e. "what is y when inputs are x?" - these pre-specify x's and don't need to sample from their joint. On the other hand, copulas are used a lot in some fields such as financial risk management (FRM) to obtain joint distribution of the factors. I'll show you one example that will help me answer your question. In FRM you need to obtain the univariate probability distribution $F_y(y)$ of scalar losses $y$. Here's one way you could do it. map losses $y$ to a vector of risk factors $\vec x$ estimate a model $y=\mathcal L(\vec x)+\varepsilon$, perhaps, with a regression estimate the join distribution of factors $\hat F_{\vec x}(\vec x)$, perhaps, with copulas sample from $\hat F_{\vec x}(.)$ to obtain a set of vectors $\vec x_i$ estimate the univariate probability distribution $\hat F_y(y)$ by fitting it to losses $\hat y_i=\hat{\mathcal L} (\vec x_i)$ Once you have $\hat F_y(.)$ you can obtain all risk metrics that you need. You see how I used both regressions and copula here. So, as I mentioned earlier, in business forecasting our model users are interested only in $\hat y|\vec x$, i.e. "what is $y$ when inputs are $\vec x$?" In this case, as in inference applications, we don't need the joint distribution and copulas at all! We only need the [regression] model $\hat{\mathcal L}$, we can specify $x$. FRM is one of the fields, where we can't specify $\vec x$ in many cases. We try to obtain their joint distribution $F_{\vec x}$. That's what copulas are useful for
Why don't we see Copula Models as much as Regression Models? A short answer is that in practice for many applications we don't need the joint probability distributions. A cynic would say that it's also because the users don't event understand what is a joint pr
19,576
AUC for someone with no stats knowledge
AUC is difficult to understand and interpret even with statistical knowledge. Without such knowledge I'd stick to the following stylized facts: An AUC close to 0.5 means the model's performance was no better than randomly classifying subjects - no better than a silly random number generator marking the samples as positive and negative. AUC is used by some to compare models; a higher AUC suggests better demonstrated performance in classification. AUC is a noisy metric. The maximum AUC is 1, for a classification model that is never wrong. Although technically the minimum AUC is 0, it makes little sense to have an AUC less than 0.5: an AUC of zero means that by simply switching the positive and negative labels you get a perfect classifier.
AUC for someone with no stats knowledge
AUC is difficult to understand and interpret even with statistical knowledge. Without such knowledge I'd stick to the following stylized facts: AUC close to 0.5 means a model performance wasn't bette
AUC for someone with no stats knowledge AUC is difficult to understand and interpret even with statistical knowledge. Without such knowledge I'd stick to the following stylized facts: AUC close to 0.5 means a model performance wasn't better than randomly classifying subjects. It wasn't better than a silly random number generator to mark the samples as positive and negative. AUC is used by some to compare models. Higher AUC suggests better demonstrated performance in classification. AUC is a noisy metric Max AUC is 1, for a classification model that is never wrong Although technically Min AUC is 0, it makes little sense to have AUC lesser than 0.5. AUC zero means that by a simple switch from positive to negative label you get to a perfect classification
AUC for someone with no stats knowledge AUC is difficult to understand and interpret even with statistical knowledge. Without such knowledge I'd stick to the following stylized facts: AUC close to 0.5 means a model performance wasn't bette
19,577
AUC for someone with no stats knowledge
To keep things reasonably simple, an AUC of 0.9 would mean that if you randomly picked one person/thing from each class of outcome (e.g., one person with the disease and one without), there is a 90% chance that the one from the class of interest (the group being modelled, here those with the disease) has the higher value (or this could be a lower value if the thing of interest was associated with the reference or default class). So if the AUC for predicting "being male" versus "being female" using height was 0.9, this would mean that if you took a random male and a random female, 90% of the time, the male would be taller.
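A small simulation of exactly this reading of the AUC (my sketch; the height distributions are invented so that the answer comes out near 0.9): draw many random male/female pairs and see how often the male is taller. The same number also falls out of the Wilcoxon rank-sum statistic.
set.seed(6)
m_h <- rnorm(500, 178, 7)   # hypothetical male heights (the class of interest)
f_h <- rnorm(500, 165, 7)   # hypothetical female heights
mean(outer(m_h, f_h, ">"))  # share of random pairs where the male is taller, ~ AUC ~ 0.9
wilcox.test(m_h, f_h)$statistic / (length(m_h) * length(f_h))   # the same estimate via rank-sum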
AUC for someone with no stats knowledge
To keep things reasonably simple, an AUC of 0.9 would mean that if you randomly picked one person/thing from each class of outcome (e.g., one person with the disease and one without), there is a 90% c
AUC for someone with no stats knowledge To keep things reasonably simple, an AUC of 0.9 would mean that if you randomly picked one person/thing from each class of outcome (e.g., one person with the disease and one without), there is a 90% chance that the one from the class of interest (the group being modelled, here those with the disease) has the higher value (or this could be a lower value if the thing of interest was associated with the reference or default class). So if the AUC for predicting "being male" versus "being female" using height was 0.9, this would mean that if you took a random male and a random female, 90% of the time, the male would be taller.
AUC for someone with no stats knowledge To keep things reasonably simple, an AUC of 0.9 would mean that if you randomly picked one person/thing from each class of outcome (e.g., one person with the disease and one without), there is a 90% c
19,578
AUC for someone with no stats knowledge
A classifier is a criterion to assign an individual to a category ("positive" or "negative") depending on some of its characteristics. Some classifiers will provide each individual with a number between $0$ and $1$, with $0$ being "totally sure it's negative" and $1$ being "totally sure it's positive". We usually take $0.5$ as the threshold between what we take as "positive" and what we take as "negative", but this is not always the case. Taking a low threshold will result in more true positives but also more false ones. Taking a higher threshold will reduce the number of false positives, but we'll also leave as negative some of the cases that were actually positive (thus fewer true positives as well). So in the end, since no classifier is perfect, it will be a compromise between the two. Each point in the ROC curve represents the rates of true and false positives for each of the possible thresholds we could choose. The AUC is the area below that curve. A high AUC indicates that the model can get a good FPR (false positive rate) without losing too much TPR (true positive rate) and vice versa. (Note that the area below the ROC curve will be big if you get a high TPR already for an FPR close to 0.) SIMPLIFIED EXAMPLE: let's say you want to use a person's height to determine whether they're a man or a woman. Your classifier will choose some height $X$ and predict that everyone above height $X$ is male and everyone below it is female. If you choose a very high $X$, like $1.90$ m, you will hardly ever mislabel a woman as male, but you will also "miss" many men. On the other hand, if you pick a low $X$ like $1.50$ m, you will correctly identify almost all men, but you will also classify a lot of women as male. For each $X$ you can choose, you'll get different true and false positive rates, but it's ultimately a somewhat arbitrary choice depending on what type of error worries you the most. In this context, we could plot the ROC curve with the different TPRs and FPRs, and then the AUC would give us an idea of how good a classifier we can hope to get using height (as opposed to some other classifier we could have thought of, using something like weight, age, blood pressure...). (See user215517's answer)
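Here is a short sketch of that height example (mine, with invented height distributions): sweep the threshold $X$, record the TPR/FPR pair at each value to trace the ROC curve, and integrate to get the AUC.
set.seed(7)
height  <- c(rnorm(500, 178, 7), rnorm(500, 165, 7))           # hypothetical male then female heights
is_male <- rep(c(TRUE, FALSE), each = 500)
thr <- c(Inf, sort(unique(height), decreasing = TRUE), -Inf)   # all possible thresholds X
tpr <- sapply(thr, function(X) mean(height[is_male]  > X))     # true positive rate at X
fpr <- sapply(thr, function(X) mean(height[!is_male] > X))     # false positive rate at X
plot(fpr, tpr, type = "l", xlab = "FPR", ylab = "TPR")
abline(0, 1, lty = 2)                                          # the "coin-flip" diagonal
sum(diff(fpr) * (head(tpr, -1) + tail(tpr, -1)) / 2)           # trapezoidal AUC, ~ 0.9 here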
AUC for someone with no stats knowledge
A classifier is a criterion to assign an individual to a category ("positive" or "negative") depending to some of its characteristics. Some classifiers will provide each individual with a number betwe
AUC for someone with no stats knowledge A classifier is a criterion to assign an individual to a category ("positive" or "negative") depending to some of its characteristics. Some classifiers will provide each individual with a number between $0$ and $1$, with $0$ being "totally sure it's negative" and 1 being "totally sure it's positive". We usually take $0.5$ as the threshold between what we take as "positive" and what we take as "negative", but this is not always the case. Taking a low threshold will result in more true positives but also more false ones. Taking a higher threshold will reduce the number of false positives, but we'll also leave as negative some of the cases that where actually positive (thus less true postiives as well). So in the end, since no classifier is perfect, it will be a compromise between the two. Each point in the ROC curve represents the rates of true and false positives for each of the possible thresholds we could choose. The AUC is the area below that curve. A high AUC indicates that the model can get a good FPR (false positive rate) without losing too much TPR (ture positive rate) and vice-versa.(Note that the area below the ROC curve will be big if you get a high TPR already for an FPR close to 0). SIMPLIFIED EXAMPLE: let's say you want to use a person's height to determine whether they're a man or a woman. Your classifier will choose some height $X$ and predict that everyone above height $X$ is male and everyone below it is female. If you choose a very high $X$, like $1.90$m, you will hardly ever mislabel a woman as male, but you will also "miss" many men. On the other hand, if you pick a low $X$ like $1.50$m, you will correctly identify almost all men, but you will also classify a lot of women as male. For each $X$ you can choose, you'll get different true and false positive ratios, but it's ultimately a kind of arbitrary choice depending of what type of error worries you the most. In this context, we could plot the ROC curve with the different TPRs and FPRs, then the AUC would give us an idea of how good of a classifier we can hope to get using height (as opposed to some other classifier we could have thought of using something like weight, age, blood pressure...). (See user215517's answer)
AUC for someone with no stats knowledge A classifier is a criterion to assign an individual to a category ("positive" or "negative") depending to some of its characteristics. Some classifiers will provide each individual with a number betwe
19,579
AUC for someone with no stats knowledge
Following up on the comment from @Nuclear Hoagie, the ROC curve for a model is generated by evaluating classifiers using a sequence of thresholds for declaring positive or negative. The AUC represents the area under the curve over the entire range of possible thresholds. Often, only a restricted range of thresholds is really of interest. When this is the case, AUC may not be the best way to compare models.
AUC for someone with no stats knowledge
Following up on the comment from @Nuclear Hoagie, the ROC curve for a model is generated by evaluating classifiers using a sequence of thresholds for declaring positive or negative. The AUC represents
AUC for someone with no stats knowledge Following up on the comment from @Nuclear Hoagie, the ROC curve for a model is generated by evaluating classifiers using a sequence of thresholds for declaring positive or negative. The AUC represents the area under the curve over the entire range of possible thresholds. Often, only a restricted range of thresholds is really of interest. When this is the case, AUC may not be the best way to compare models.
AUC for someone with no stats knowledge Following up on the comment from @Nuclear Hoagie, the ROC curve for a model is generated by evaluating classifiers using a sequence of thresholds for declaring positive or negative. The AUC represents
19,580
Why the Ridge Regression is NOT scale-invariant?
The intuition here is that there's a sleight-of-hand happening when you use the same symbol $X$ for both the original data and the rescaled data. It's misleading because the rescaling $\tilde{X}= XD$ is not the same as the original $X$, so we should make that explicit and write down how we're rescaling. We can demonstrate this by considering two cases, first with the original units in $X$ and second the case where we use a rescaled matrix $\tilde{X}= XD$ where $D$ is a diagonal matrix that has all positive entries on the diagonal. If $X$ has shape $n \times p$ then $D$ has shape $p \times p$. (You can actually use any $D_{ii} \neq 0$ but "rescaling" is almost always meant to be restricted to multiplication by a positive scalar.) In the first case, we have $$\beta(X) = (X^TX + \lambda I)^{-1}X^T y$$ which is just as written in the question. In the second case, we apply the rescaling to $X$ and we have $$\begin{aligned} \beta(\tilde{X}) &= (\tilde{X}^T\tilde{X} + \lambda I)^{-1}\tilde{X}^T y\\ &= (DX^TXD + \lambda I)^{-1}D X^Ty \\ &= (D(X^\top X + \lambda D^{-2})D)^{-1}DX^Ty \\ &= D^{-1}(X^T X + \lambda D^{-2})^{-1}X^Ty \end{aligned}$$ (remembering that $D$ is diagonal, so $D^T = D$). From this we can conclude that the coefficients $\beta(X)$ and $\beta(\tilde{X})$ are only the same if $D=I$. The final line shows that the rescaling has two effects on the coefficients. It has a multiplicative effect on the coefficients (the leading $D^{-1}$), just as we would intuitively expect based on what happens when we rescale in the OLS case. The last line also makes explicit that the change in scale is "absorbed" into the penalty: the coefficient $\beta(\tilde{X})_i$ is penalized by $\lambda D_{ii}^{-2}$, i.e. inversely to the square of the rescaling $D_{ii}$. (Thanks to Firebug for this helpful suggestion.)
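A tiny numerical check of the last identity (my sketch; the dimensions, penalty and rescaling are arbitrary):
set.seed(8)
X <- matrix(rnorm(50 * 3), 50, 3); y <- rnorm(50); lambda <- 0.7
D <- diag(c(2, 0.5, 10))                                  # a positive rescaling
Xt <- X %*% D
b_tilde <- solve(t(Xt) %*% Xt + lambda * diag(3), t(Xt) %*% y)
b_check <- solve(D) %*% solve(t(X) %*% X + lambda * solve(D %*% D), t(X) %*% y)
max(abs(b_tilde - b_check))                               # ~ 0: the rescaling is absorbed into the penalty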
Why the Ridge Regression is NOT scale-invariant?
The intuition here is that there's a sleight-of-hand happening when you use the same symbol $X$ for both the original data and the rescaled data. It's misleading because the rescaling $\tilde{X}= XD$
Why the Ridge Regression is NOT scale-invariant? The intuition here is that there's a sleight-of-hand happening when you use the same symbol $X$ for both the original data and the rescaled data. It's misleading because the rescaling $\tilde{X}= XD$ is not the same as the original $X$, so we should make that explicit and write down how we're rescaling. We can demonstrate this by considering two cases, first with the original units in $X$ and second the case where we use a rescaled matrix $\tilde{X}= XD$ where $D$ is a diagonal matrix that has all positive entries on the diagonal. If $X$ has shape $n \times p$ then $D$ has shape $p \times p$. (You can actually use any $D_{ii} \neq 0$ but "rescaling" is almost always meant to be restricted to multiplication by a positive scalar.) In the first case, we have $$\beta(X) = (X^TX + \lambda I)^{-1}X^T y$$ which is just as written in the question. In the second case, we apply the rescaling to $X$ and we have $$\begin{aligned} \beta(\tilde{X}) &= (\tilde{X}^T\tilde{X} + \lambda I)^{-1}\tilde{X}^T y\\ &= (DX^TXD + \lambda I)^{-1}D X^Ty \\ &= (D(X^\top X + \lambda D^{-2})D)^{-1}DX^Ty \\ &= D^{-1}(X^T X + \lambda D^{-2})^{-1}X^Ty \end{aligned}$$ (remembering that $D$ is diagonal, so $D^T = D$). From this we can conclude that the coefficients $\beta_X$ and $\beta_\tilde{X}$ are only the same if $D=I$. The final line shows that the rescaling two effects on the coefficients. It has a multiplicative effect on the coefficients, just as we would intuitively expect based on what happens when we rescale in the OLS case. The last line makes explicit that the change in scale is "absorbed" in $\lambda$, and that the change in scale is gives $\beta(\tilde{X})_i$ penalized inversely to the square of the rescaling $D_{ii}$. (Thanks to Firebug for this helpful suggestion.)
Why the Ridge Regression is NOT scale-invariant? The intuition here is that there's a sleight-of-hand happening when you use the same symbol $X$ for both the original data and the rescaled data. It's misleading because the rescaling $\tilde{X}= XD$
19,581
Why the Ridge Regression is NOT scale-invariant?
Write it in terms of the cost function $$ \|y - X\beta\|^2 + \lambda \sum_i \beta_i^2 $$ As you can see, each of the model parameters $\beta_i$ gets the same penalty $\lambda$ (the $\lambda I$ part). If we want the same degree of penalization for each parameter, we need the parameters to be on the same scale. This can be achieved either by scaling the data or by using a different value of $\lambda$ for each parameter - and choosing those per-parameter penalties is equivalent to rescaling the data.
Why the Ridge Regression is NOT scale-invariant?
Write it in terms of the cost function $$ (y - X\beta)^2 + \sum_i \lambda \beta_i^2 $$ As you can see, each of the model parameters $\beta_i$ has the same penalty $\lambda$ (the $\lambda I$ part). If
Why the Ridge Regression is NOT scale-invariant? Write it in terms of the cost function $$ (y - X\beta)^2 + \sum_i \lambda \beta_i^2 $$ As you can see, each of the model parameters $\beta_i$ has the same penalty $\lambda$ (the $\lambda I$ part). If we want it to have the same degree of penalization for each parameter, we need them to have the same scale. This can be achieved either by scaling the data or by using different values of $\lambda$ per parameter, but scaling $\lambda$ is equivalent of scaling the data.
Why the Ridge Regression is NOT scale-invariant? Write it in terms of the cost function $$ (y - X\beta)^2 + \sum_i \lambda \beta_i^2 $$ As you can see, each of the model parameters $\beta_i$ has the same penalty $\lambda$ (the $\lambda I$ part). If
19,582
Why the Ridge Regression is NOT scale-invariant?
Note that being tractable does not imply being scale invariant; for example, PCA is tractable but not scale invariant. Now let's look at the solution. $$\hat{\beta} = (X^TX + \lambda I)^{-1}X^TY$$ We have just one penalisation parameter, $\lambda$, which is added to $X^T X$ to form the penalty. Regardless of the units of each variable (columns of $X$) we have the same penalty. If I had $x_1$ in meters but then converted to km with the same penalty, I wouldn't achieve the same fit to the data. Let's consider a simple numerical example with two variables and some R code. The main thing to consider is that, under OLS, if I multiply $x_1$ by a factor $k$ then the new $\beta_1$ would be divided by $k$. In this numerical example I multiply $x_1$ by $100$, but we can see that the resulting ridge $\beta_1$ is only divided by about $90$. Under OLS, however, I recover the scaling factor of $100$ exactly.
set.seed(134221)
x1 <- runif(10)
x2 <- runif(10)
eps <- rnorm(10)*0.1
y <- 2 - 0.3*x1 + 0.9*x2 + eps
## fix lambda = 0.1
X0 <- cbind(1, x1, x2)
b0 <- solve(t(X0)%*%X0 + diag(0.1, 3))%*%t(X0)%*%y
b0
## now change x1 <- 100*x1
X1 <- cbind(1, 100*x1, x2)
b1 <- solve(t(X1)%*%X1 + diag(0.1, 3))%*%t(X1)%*%y
b1
## under OLS if we have x1 := k*x1
## then beta1 := beta1/k
b0[2]/b1[2]
[1] 90.42805
## OLS regression
fit0 <- summary(lm(y ~ X0))
fit1 <- summary(lm(y ~ X1))
fit0$coefficients[2]/fit1$coefficients[2]
[1] 100
Why the Ridge Regression is NOT scale-invariant?
Note that being tractable doesn't mean not scale invariant. For example, PCA is tractable but not scale invariant. Now let's look at the solution. $$\hat{\beta} = (X^TX + \lambda I)^{-1}X^TY$$ We have
Why the Ridge Regression is NOT scale-invariant? Note that being tractable doesn't mean not scale invariant. For example, PCA is tractable but not scale invariant. Now let's look at the solution. $$\hat{\beta} = (X^TX + \lambda I)^{-1}X^TY$$ We have just one penalisation parameter, $\lambda$ which is added to $X^T X$ to form the penalty. Regardless of the units of each variable (columns of $X$) we have the same penalty. If I had $x_1$ in meters but then converted to km with the same penalty I wouldn't achieve the same fit to the data. Lets consider a simple numerical example, two variables with some R code. The main thing to consider is that, under OLS, if I multiply $x_1$ by a factor $k$ then the new $\beta_1$ would be divided by $k$. In this numerical example I multiply $x_1$ by $100$, but we can see that the resulting $\beta_1$ is only divided by $90$. However, under OLS I recover this scaling factor of $100$. set.seed(134221) x1 <- runif(10) x2 <- runif(10) eps <- rnorm(10)*0.1 y <- 2 - 0.3*x1 + 0.9*x2 + eps ## fix lambda = 0.1 X0 <- cbind(1, x1, x2) b0 <- solve(t(X0)%*%X0 + diag(0.1, 3))%*%t(X0)%*%y b0 ## now change x1 <- 100*x1 X1 <- cbind(1, 100*x1, x2) ## now change x1 <- 100*x1 b1 <- solve(t(X1)%*%X1 + diag(0.1, 3))%*%t(X1)%*%y b1 ## under OLS if we have x1 := k*x1 ## then beta1 := beta1/k b0[2]/b1[2] [1] 90.42805 ## OLS regression fit0 <- summary(lm(y ~ X0)) fit1 <- summary(lm(y ~ X1)) fit0$coefficients[2]/fit1$coefficients[2] [1] 100
Why the Ridge Regression is NOT scale-invariant? Note that being tractable doesn't mean not scale invariant. For example, PCA is tractable but not scale invariant. Now let's look at the solution. $$\hat{\beta} = (X^TX + \lambda I)^{-1}X^TY$$ We have
19,583
Does the function $e^x/(1+e^x)$ have a standard name?
It does not have a standard name. In different areas of statistics, it has different names. In the neural networks and deep learning community, it is called the sigmoid function. This is confusing for everyone else, because sigmoid is just a fancy way of saying "S-shaped" and this function is not unique among S-shaped functions; for example, $\tanh$ is also S-shaped and widely used in neural networks, yet it is not commonly termed "sigmoidal" in neural network literature. In the GLM literature, this is called the logistic function (as in logistic regression). If the logit function is $$\text{logit}(p)= \log\left(\frac{p}{1-p}\right)= \log(p)-\log(1-p)=x$$ for $p\in(0,1)$, then $$\text{logit}^{-1}(x)= \frac{\exp(x)}{1 + \exp(x)}= \frac{1}{1+\exp(-x)}= p$$ for $x\in\mathbb{R}$. This is the reason some people call $\text{logit}^{-1}$ the inverse logit or anti-logit function. (Thanks, Glen_b!) Rarely, I've seen the name expit used; as far as I can tell, this is a back-formation from the word logit but never really caught on. (Thanks, CliffAB!)
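As an aside (my addition, not part of the original answer), base R ships this function and its inverse under the p/q naming convention:
plogis(2)                 # logistic / inverse logit: exp(2) / (1 + exp(2)) = 0.8807971
qlogis(0.8807971)         # logit: log(p / (1 - p)) = 2
curve(plogis(x), -6, 6)   # the S-shaped ("sigmoid") curve under discussion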
Does the function $e^x/(1+e^x)$ have a standard name?
It does not have a standard name. In different areas of statistics, it has different names. In the neural networks and deep learning community, it is called the sigmoid function. This is confusing for
Does the function $e^x/(1+e^x)$ have a standard name? It does not have a standard name. In different areas of statistics, it has different names. In the neural networks and deep learning community, it is called the sigmoid function. This is confusing for everyone else, because sigmoid is just a fancy way of saying "S-shaped" and this function is not unique among S-shaped functions; for example, $\tanh$ is also S-shaped and widely used in neural networks, yet it is not commonly termed "sigmoidal" in neural network literature. In the GLM literature, this is called the logistic function (as in logistic regression). If the logit function is $$\text{logit}(p)= \log\left(\frac{p}{1-p}\right)= \log(p)-\log(1-p)=x$$ for $p\in(0,1)$, then $$\text{logit}^{-1}(x)= \frac{\exp(x)}{1 + \exp(x)}= \frac{1}{1+\exp(-x)}= p$$ for $x\in\mathbb{R}$. This is the reason some people call $\text{logit}^{-1}$ the inverse logit or anti-logit function. (Thanks, Glen_b!) Rarely, I've seen the name expit used; as far as I can tell, this is a back-formation from the word logit but never really caught on. (Thanks, CliffAB!)
Does the function $e^x/(1+e^x)$ have a standard name? It does not have a standard name. In different areas of statistics, it has different names. In the neural networks and deep learning community, it is called the sigmoid function. This is confusing for
19,584
Textbook on reinforcement learning
I think Sutton and Barto is still the standard. There are a lot of slide decks and notes from AI classes online, but they typically don't go into too much detail. Sutton and Barto is a little old, but they are preparing a 2nd edition of their textbook. A draft, dated January 2018, is available here; it's linked from Sutton's webpage, which also has the full text of the first edition. I would look at this before tackling Kochenderfer et al.'s Decision Making Under Uncertainty. That book has some interesting applications (mostly in aviation) but it moves quickly and bounces around a lot. Szepesvári's Algorithms for Reinforcement Learning is also good, but pithy--it takes about twenty pages to get to $\mathrm{TD}(\lambda)$, vs. seven chapters and 150 pages in the newer Sutton and Barto. Other than that, you might try diving into some papers--the reinforcement learning stuff tends to be pretty accessible.
Textbook on reinforcement learning
I think Sutton and Barto is still the standard. There are a lot of slide decks and notes from AI classes online, but they typically don't go into too much detail. Sutton and Barto is a little old, bu
Textbook on reinforcement learning I think Sutton and Barto is still the standard. There are a lot of slide decks and notes from AI classes online, but they typically don't go into too much detail. Sutton and Barto is a little old, but they are preparing a 2nd edition of their textbook. A draft, dated January 2018, is available here; it's linked from Sutton's webpage, which also has the full text of the first edition. I would look at this before tackling Kochenderfer et al.'s Decision Making Under Uncertainty. That book has some interesting applications (mostly in aviation) but it moves quickly and bounces around a lot. Szepesvári's Algorithms for Reinforcement Learning is also good, but pithy--it takes about twenty pages to get to $\mathrm{TD}(\lambda)$, vs. seven chapters and 150 pages in the newer Sutton and Barto. Other than that, you might try diving into some papers--the reinforcement learning stuff tends to be pretty accessible.
Textbook on reinforcement learning I think Sutton and Barto is still the standard. There are a lot of slide decks and notes from AI classes online, but they typically don't go into too much detail. Sutton and Barto is a little old, bu
19,585
Textbook on reinforcement learning
You might want to check out Algorithms for Reinforcement Learning by Csaba Szepesvári, published in 2010. PDF downloadable from the web site. In my opinion, it is a bit more technical than Sutton and Barto but covers less material.
Textbook on reinforcement learning
You might want to check out Algorithms for Reinforcement Learning by Csaba Szepesvári, published in 2010. PDF downloadable from the web site. In my opinion, it is a bit more technical than Sutton and
Textbook on reinforcement learning You might want to check out Algorithms for Reinforcement Learning by Csaba Szepesvári, published in 2010. PDF downloadable from the web site. In my opinion, it is a bit more technical than Sutton and Barto but covers less material.
Textbook on reinforcement learning You might want to check out Algorithms for Reinforcement Learning by Csaba Szepesvári, published in 2010. PDF downloadable from the web site. In my opinion, it is a bit more technical than Sutton and
19,586
Textbook on reinforcement learning
Here you have some good textbooks/references: Classic Sutton RS, Barto AG. Reinforcement Learning: An Introduction. Cambridge, Mass: A Bradford Book; 1998. 322 p. The draft for the second edition is available for free: https://webdocs.cs.ualberta.ca/~sutton/book/the-book.html Russell/Norvig Chapter 21: Russell SJ, Norvig P, Davis E. Artificial intelligence: a modern approach. Upper Saddle River, NJ: Prentice Hall; 2010. More technical Szepesvári C. Algorithms for reinforcement learning. Synthesis Lectures on Artificial Intelligence and Machine Learning. 2010;4(1):1–103. http://www.ualberta.ca/~szepesva/RLBook.html Bertsekas DP. Dynamic Programming and Optimal Control. 4th edition. Belmont, Mass.: Athena Scientific; 2007. 1270 p. Chapter 6, vol 2 is available for free: http://web.mit.edu/dimitrib/www/dpchapter.pdf For more recent developments Wiering M, van Otterlo M, editors. Reinforcement Learning. Berlin, Heidelberg: Springer Berlin Heidelberg; 2012. Available from: http://link.springer.com/10.1007/978-3-642-27645-3 Kochenderfer MJ, Amato C, Chowdhary G, How JP, Reynolds HJD, Thornton JR, et al. Decision Making Under Uncertainty: Theory and Application. 1st edition. Cambridge, Massachusetts: The MIT Press; 2015. 352 p. Multi-agent reinforcement learning Buşoniu L, Babuška R, Schutter BD. Multi-agent Reinforcement Learning: An Overview. In: Srinivasan D, Jain LC, editors. Innovations in Multi-Agent Systems and Applications - 1. Springer Berlin Heidelberg; 2010. p. 183–221. Available from: http://link.springer.com/chapter/10.1007/978-3-642-14435-6_7 Schwartz HM. Multi-agent machine learning: a reinforcement approach. Hoboken, New Jersey: Wiley; 2014. Videos / Courses I would also suggest David Silver's course on YouTube: https://www.youtube.com/playlist?list=PL5X3mDkKaJrL42i_jhE4N-p6E2Ol62Ofa
Textbook on reinforcement learning
Here you have some good textbooks/references: Classic Sutton RS, Barto AG. Reinforcement Learning: An Introduction. Cambridge, Mass: A Bradford Book; 1998. 322 p. The draft for the second edition is
Textbook on reinforcement learning Here you have some good textbooks/references: Classic Sutton RS, Barto AG. Reinforcement Learning: An Introduction. Cambridge, Mass: A Bradford Book; 1998. 322 p. The draft for the second edition is available for free: https://webdocs.cs.ualberta.ca/~sutton/book/the-book.html Russell/Norvig Chapter 21: Russell SJ, Norvig P, Davis E. Artificial intelligence: a modern approach. Upper Saddle River, NJ: Prentice Hall; 2010. More technical Szepesvári C. Algorithms for reinforcement learning. Synthesis Lectures on Artificial Intelligence and Machine Learning. 2010;4(1):1–103. http://www.ualberta.ca/~szepesva/RLBook.html Bertsekas DP. Dynamic Programming and Optimal Control. 4th edition. Belmont, Mass.: Athena Scientific; 2007. 1270 p. Chapter 6, vol 2 is available for free: http://web.mit.edu/dimitrib/www/dpchapter.pdf For more recent developments Wiering M, van Otterlo M, editors. Reinforcement Learning. Berlin, Heidelberg: Springer Berlin Heidelberg; 2012. Available from: http://link.springer.com/10.1007/978-3-642-27645-3 Kochenderfer MJ, Amato C, Chowdhary G, How JP, Reynolds HJD, Thornton JR, et al. Decision Making Under Uncertainty: Theory and Application. 1st edition. Cambridge, Massachusetts: The MIT Press; 2015. 352 p. Multi-agent reinforcement learning Buşoniu L, Babuška R, Schutter BD. Multi-agent Reinforcement Learning: An Overview. In: Srinivasan D, Jain LC, editors. Innovations in Multi-Agent Systems and Applications - 1. Springer Berlin Heidelberg; 2010. p. 183–221. Available from: http://link.springer.com/chapter/10.1007/978-3-642-14435-6_7 Schwartz HM. Multi-agent machine learning: a reinforcement approach. Hoboken, New Jersey: Wiley; 2014. Videos / Courses I would also suggest David Silver's course on YouTube: https://www.youtube.com/playlist?list=PL5X3mDkKaJrL42i_jhE4N-p6E2Ol62Ofa
Textbook on reinforcement learning Here you have some good textbooks/references: Classic Sutton RS, Barto AG. Reinforcement Learning: An Introduction. Cambridge, Mass: A Bradford Book; 1998. 322 p. The draft for the second edition is
19,587
Textbook on reinforcement learning
My favourite lecture notes on reinforcement learning are the ones by Andrew Ng in Stanford's course on ML CS229: Reinforcement learning notes Stanford CS229. You can also download the lecture videos on iTunes. Or on YouTube, they start at the following link: Lecture 16 CS229
Textbook on reinforcement learning
My favourite lecture notes on reinforcement learning are the ones by Andrew Ng in Stanford's course on ML CS229: Reinforcement learning notes Stanford CS229. You can also download the lecture videos on
Textbook on reinforcement learning My favourite lecture notes on reinforcement learning are the ones by Andrew Ng in Stanford's course on ML CS229: Reinforcement learning notes Stanford CS229. You can also download the lecture videos on iTunes. Or on YouTube, they start at the following link: Lecture 16 CS229
Textbook on reinforcement learning My favourite lectures notes on reinforcement learning are the ones by Andrew Ng in Stanford's course on ML CS229: Reiforcment learning notes Stanford CS229 You can also download the lecture videos on
19,588
Choosing a statistical test based on the outcome of another (e.g. normality)
Given that $p$ is the probability of observing data this extreme or more extreme if $H_0$ is true, what is the interpretation of $p$ when that $p$ was arrived at through a process involving a contingent decision about which test to run? The answer is unknowable (or at least very nearly unknowable). By making the decision to run the test or not on the basis of some other probabilistic process, you've made the interpretation of your outcome even more convoluted. $p$ values are maximally interpretable when the sample size and analysis plan were fully specified in advance. In other situations, the interpretation gets difficult, and that is why it is 'not a good idea'. That being said, it is a widely accepted practice... after all, why even bother to run the test you had planned once you find out it would be invalid? The answer to that question is far less certain. This all boils down to the simple fact that null hypothesis significance testing (the primary use case of $p$) has some problems that are difficult to surmount.
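As a hedged sketch of this point (simulated, and mine rather than the answerer's): under a true null we can pre-test normality and then pick a t-test or a Wilcoxon test; whatever rejection rate comes out belongs to the whole two-stage procedure, not to either test alone.

set.seed(1)
B <- 5000
reject <- logical(B)
for (b in seq_len(B)) {
  x <- rexp(20); y <- rexp(20)  # H0 true: both samples share one distribution
  normal_ok <- shapiro.test(c(x - mean(x), y - mean(y)))$p.value > 0.05
  p <- if (normal_ok) t.test(x, y)$p.value else wilcox.test(x, y)$p.value
  reject[b] <- p < 0.05
}
mean(reject)  # realized size of the composite procedure; compare with 0.05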
Choosing a statistical test based on the outcome of another (e.g. normality)
Given that $p$ is the probability of observing data this extreme or more extreme if $H_0$ is true, then what is the interpretation of $p$ where the $p$ is arrived at through a process where there was
Choosing a statistical test based on the outcome of another (e.g. normality) Given that $p$ is the probability of observing data this extreme or more extreme if $H_0$ is true, what is the interpretation of $p$ when that $p$ was arrived at through a process involving a contingent decision about which test to run? The answer is unknowable (or at least very nearly unknowable). By making the decision to run the test or not on the basis of some other probabilistic process, you've made the interpretation of your outcome even more convoluted. $p$ values are maximally interpretable when the sample size and analysis plan were fully specified in advance. In other situations, the interpretation gets difficult, and that is why it is 'not a good idea'. That being said, it is a widely accepted practice... after all, why even bother to run the test you had planned once you find out it would be invalid? The answer to that question is far less certain. This all boils down to the simple fact that null hypothesis significance testing (the primary use case of $p$) has some problems that are difficult to surmount.
Choosing a statistical test based on the outcome of another (e.g. normality) Given that $p$ is the probability of observing data this extreme or more extreme if $H_0$ is true, then what is the interpretation of $p$ where the $p$ is arrived at through a process where there was
19,589
Choosing a statistical test based on the outcome of another (e.g. normality)
For example, people often choose to use a non parametric test when some other test suggests that the residuals are not normally distributed. This approach seems pretty widely accepted but does not seem to agree with the first sentence in this paragraph. I was just hoping to get clarification on this issue. Yes, a lot of people do this kind of thing, and change their second test to one that can deal with heteroskedasticity when they reject equality of variance, and so on. Just because something is common doesn't mean it's necessarily wise. Indeed, in some places (I won't name the worst-offending disciplines) a lot of this formal hypothesis testing contingent on other formal hypothesis testing is actually taught. The problem with doing it is that your procedures don't have their nominal properties, sometimes not even close. (On the other hand, assuming things like that without any consideration at all for potentially extreme violation could be even worse.) Several papers suggest that for the heteroskedastic case, you're better off simply acting as if the variances aren't equal than testing for it and only doing something about it on rejection. In the normality case it's less clear. In large samples at least, in many cases normality isn't all that crucial (but ironically, with large samples, your test of normality is much more likely to reject), as long as the non-normality isn't too wild. One exception is prediction intervals, where you really do need your distributional assumption to be close to right. In part, one problem is that hypothesis tests answer a different question than the one that needs to be answered. You don't really need to know 'is the data truly normal' (almost always, it won't be exactly normal a priori). The question is rather 'how badly will the extent of non-normality impact my inference?'. That second issue is usually either just about independent of sample size or actually gets better with increasing sample size - yet hypothesis tests will almost always reject at large sample sizes. There are many situations where there are robust or even distribution-free procedures which are very close to fully efficient even at the normal (and potentially far more efficient at some fairly modest departures from it) - in many cases it would seem silly not to take such a prudent approach.
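A hedged illustration of the irony about large samples (simulated, not from the answer): the same mild departure from normality, a $t_{10}$ parent that is essentially harmless for mean-based inference at these sizes, typically passes a normality test at n = 40 and typically fails it at n = 5000.

set.seed(2)
shapiro.test(rt(40, df = 10))$p.value    # usually above 0.05
shapiro.test(rt(5000, df = 10))$p.value  # usually far below 0.05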
Choosing a statistical test based on the outcome of another (e.g. normality)
For example, people often choose to use a non parametric test when some other test suggests that the residuals are not normally distributed. This approach seems pretty widely accepted but does not see
Choosing a statistical test based on the outcome of another (e.g. normality) For example, people often choose to use a non parametric test when some other test suggests that the residuals are not normally distributed. This approach seems pretty widely accepted but does not seem to agree with the first sentence in this paragraph. I was just hoping to get clarification on this issue. Yes, a lot of people do this kind of thing, and change their second test to one that can deal with heteroskedasticity when they reject equality of variance, and so on. Just because something is common doesn't mean it's necessarily wise. Indeed, in some places (I won't name the worst-offending disciplines) a lot of this formal hypothesis testing contingent on other formal hypothesis testing is actually taught. The problem with doing it is that your procedures don't have their nominal properties, sometimes not even close. (On the other hand, assuming things like that without any consideration at all for potentially extreme violation could be even worse.) Several papers suggest that for the heteroskedastic case, you're better off simply acting as if the variances aren't equal than testing for it and only doing something about it on rejection. In the normality case it's less clear. In large samples at least, in many cases normality isn't all that crucial (but ironically, with large samples, your test of normality is much more likely to reject), as long as the non-normality isn't too wild. One exception is prediction intervals, where you really do need your distributional assumption to be close to right. In part, one problem is that hypothesis tests answer a different question than the one that needs to be answered. You don't really need to know 'is the data truly normal' (almost always, it won't be exactly normal a priori). The question is rather 'how badly will the extent of non-normality impact my inference?'. That second issue is usually either just about independent of sample size or actually gets better with increasing sample size - yet hypothesis tests will almost always reject at large sample sizes. There are many situations where there are robust or even distribution-free procedures which are very close to fully efficient even at the normal (and potentially far more efficient at some fairly modest departures from it) - in many cases it would seem silly not to take such a prudent approach.
Choosing a statistical test based on the outcome of another (e.g. normality) For example, people often choose to use a non parametric test when some other test suggests that the residuals are not normally distributed. This approach seems pretty widely accepted but does not see
19,590
Choosing a statistical test based on the outcome of another (e.g. normality)
The main issues have been well explained by others, but are confounded with two underlying or associated attitudes: over-reverence for P-values, which are at most one kind of evidence in statistics, and reluctance to see that statistical reports are inevitably based on a combination of choices, some firmly evidence-based, others based on a mix of previous analyses, intuition, guesswork, judgment, theory, and so forth. Suppose my cautious friend Test Everything and I both chose a log transformation for a response, but I jump to that conclusion based on a mix of physical reasoning and previous experience with data, while Test Everything chooses log scale based on Box-Cox testing and estimation of a parameter. Now we both use the same multiple regression. Do our P-values have different interpretations? On one interpretation, Test Everything's P-values are conditional on her previous inferences. I used inferences too, but mostly they were informal, based on a long series of previous graphs, calculations, etc. in previous projects. How is that to be reported? Naturally, the regression results are exactly the same for Test Everything and myself. The same mix of sensible advice and dubious philosophy applies to the choice of predictors and functional form. Economists, for example, are widely taught to respect previous theoretical discussions and to be wary of data snooping, with good reason in each case. But in the weakest instances the theory concerned is just a tentative suggestion made previously in the literature, very likely after some empirical analysis. For many authors, though, literature references sanctify, while learning from the data in hand is suspect.
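For readers who want to see Test Everything's formal route, here is a hedged sketch on simulated data (the variable names are mine): MASS::boxcox() profiles the Box-Cox log-likelihood over the transformation parameter $\lambda$, and a profile maximum near $\lambda = 0$ is the usual formal case for a log transformation.

library(MASS)                                    # for boxcox()
set.seed(3)
x <- runif(100, 1, 10)
y <- exp(0.5 + 0.3 * x + rnorm(100, sd = 0.3))   # response truly log-scale
bc <- boxcox(y ~ x, lambda = seq(-1, 1, 0.05), plotit = FALSE)
bc$x[which.max(bc$y)]                            # maximizing lambda, near 0 here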
Choosing a statistical test based on the outcome of another (e.g. normality)
The main issues have been well explained by others, but are confounded with two underlying or associated attitudes: over-reverence for P-values, which are at most one kind of evidence in statistics, and reluctance to see that
Choosing a statistical test based on the outcome of another (e.g. normality) The main issues have been well explained by others, but are confounded with two underlying or associated attitudes: over-reverence for P-values, which are at most one kind of evidence in statistics, and reluctance to see that statistical reports are inevitably based on a combination of choices, some firmly evidence-based, others based on a mix of previous analyses, intuition, guesswork, judgment, theory, and so forth. Suppose my cautious friend Test Everything and I both chose a log transformation for a response, but I jump to that conclusion based on a mix of physical reasoning and previous experience with data, while Test Everything chooses log scale based on Box-Cox testing and estimation of a parameter. Now we both use the same multiple regression. Do our P-values have different interpretations? On one interpretation, Test Everything's P-values are conditional on her previous inferences. I used inferences too, but mostly they were informal, based on a long series of previous graphs, calculations, etc. in previous projects. How is that to be reported? Naturally, the regression results are exactly the same for Test Everything and myself. The same mix of sensible advice and dubious philosophy applies to the choice of predictors and functional form. Economists, for example, are widely taught to respect previous theoretical discussions and to be wary of data snooping, with good reason in each case. But in the weakest instances the theory concerned is just a tentative suggestion made previously in the literature, very likely after some empirical analysis. For many authors, though, literature references sanctify, while learning from the data in hand is suspect.
Choosing a statistical test based on the outcome of another (e.g. normality) The main issues have been well explained by others, but are confounded with underlying or associated Over-reverence for P-values, at most one kind of evidence in statistics. Reluctance to see that
19,591
Positive correlation and negative regressor coefficient sign
Both @Henry and @JDav are pointing you in the right direction (+1 to each). However, I'm very visual, and it helps me if I can see how this works. In that respect, here's a quick plot in which the first variable is confounded with group membership. If the groups are ignored, the correlation coefficient is positive (as can be seen in the figure), but in a multiple regression, $\beta_{x1}=-1$, albeit with different intercepts for the three groups. As further food for thought, when all variables are categorical (instead of continuous, as in this case) the phenomenon of the apparent relationship reversing upon the inclusion of other variables is known as Simpson's paradox. Since it's ultimately quite similar, it may help to read about that as well. It is discussed on CV here.
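Since the plot itself is not reproduced here, a hedged numeric companion (simulated data, not the code behind the original figure) makes the same point: $x_1$ is confounded with group, the marginal correlation is positive, yet the multiple-regression coefficient on $x_1$ is close to $-1$.

set.seed(4)
g  <- rep(0:2, each = 30)                   # three groups
x1 <- 3 * g + runif(90)                     # group membership pushes x1 up
y  <- 5 * g - 1 * x1 + rnorm(90, sd = 0.2)  # true within-group slope is -1
cor(x1, y)                                  # strongly positive marginally
coef(lm(y ~ x1 + factor(g)))["x1"]          # close to -1 once groups enter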
Positive correlation and negative regressor coefficient sign
Both @Henry and @JDav are pointing you in the right direction (+1 to each). However, I'm very visual, and it helps me if I can see how this works. In that respect, here's a quick plot in which the f
Positive correlation and negative regressor coefficient sign Both @Henry and @JDav are pointing you in the right direction (+1 to each). However, I'm very visual, and it helps me if I can see how this works. In that respect, here's a quick plot in which the first variable is confounded with group membership. If the groups are ignored, the correlation coefficient is positive (as can be seen in the figure), but in a multiple regression, $\beta_{x1}=-1$, albeit with different intercepts for the three groups. As further food for thought, when all variables are categorical (instead of continuous, as in this case) the phenomenon of the apparent relationship reversing upon the inclusion of other variables is known as Simpson's paradox. Since it's ultimately quite similar, it may help to read about that as well. It is discussed on CV here.
Positive correlation and negative regressor coefficient sign Both @Henry, and @JDav are pointing you in the right direction (+1 to each). However, I'm very visual and it helps me if I can see how this works. In that respect, here's a quick plot in which the f
19,592
Positive correlation and negative regressor coefficient sign
If the positively correlated regressor is the only regressor in a linear model, then its coefficient should be positive. If there are several regressors and they are not independent, then you can see the effect you are asking about. Read about confounding for some explanation.
Positive correlation and negative regressor coefficient sign
If the positively-correlated regressor is the only regressor in a linear model then its coefficient should be positive. If there are several regressors and they are not independent then you can
Positive correlation and negative regressor coefficient sign If the positively correlated regressor is the only regressor in a linear model, then its coefficient should be positive. If there are several regressors and they are not independent, then you can see the effect you are asking about. Read about confounding for some explanation.
Positive correlation and negative regressor coefficient sign If the positively-correlated regressor is the only regressor in a linear model then its coefficient should be positive. If there are several regressors and they are not independent then you can
19,593
How to compute correlation between/within groups of variables?
What @rolando suggested looks like a good start, if not the whole response (IMO). Let me continue with the correlational approach, following the Classical Test Theory (CTT) framework. Here, as noted by @Jeromy, a summary measure for your group of characteristics might be considered as the totalled (or sum) score of all items (a characteristic, in your words) belonging to what I will now refer to as a scale. Under CTT, this allows us to formalize individual "trait" propensity or liability as one's location on a continuous scale reflecting an underlying construct (a latent trait), although here it is merely an ordinal scale (but this is another debate in the psychometrics literature). What you described has to do with what is known as convergent (to what extent items belonging to the same scale correlate with each other) and discriminant (items belonging to different scales should not correlate to a great extent) validity in psychometrics. Classical techniques include multi-trait multi-method (MTMM) analysis (Campbell & Fiske, 1959). An illustration of how it works is shown below (three methods or instruments, three constructs or traits): In this MTMM matrix, the diagonal elements might be Cronbach's alpha or test-retest intraclass correlation; these are indicators of the reliability of each measurement scale. The validity of the hypothesized (shared) constructs is assessed by the correlation of scale scores when different instruments are used to assess the same trait; if these instruments were developed independently, a high correlation ($> 0.7$) would support the idea that the traits are defined in a consistent and objective manner. The remaining cells in this MTMM matrix summarize relations between traits within method and between traits across methods, and are indicative of the way unique constructs are measured with different scales and of the relations between the traits in a given scale. Assuming independent traits, we generally don't expect them to be high (a recommended threshold is $<.3$), but more formal tests of hypotheses (on correlation point estimates) can be carried out. A subtlety is that we use so-called "rest correlation", that is, we compute the correlation between an item (or trait) and its scale (or method) after removing the contribution of this item to the sum score of this scale (correction for overlap). Even if this method was initially developed to assess convergent and discriminant validity of a certain number of traits as studied by different measurement instruments, it can be applied to a single multi-scale instrument. The traits then become the items, and the methods are just the different scales. A generalization of this method to a single instrument is also known as multitrait scaling. Items correlating as expected (i.e., with their own scale rather than a different scale) are counted as scaling successes. We generally assume, however, that the different scales are not correlated, that is, they are targeting different hypothetical constructs. Averaging the within- and between-scale correlations then provides a quick way of summarizing the internal structure of your instrument. Another convenient way of doing so is to apply a cluster analysis to the matrix of pairwise correlations and see how your variables hang together.
Of note, in both cases the usual caveats of working with correlation measures apply: you cannot account for measurement error, you need a large sample, and instruments or tests are assumed to be "parallel" (tau-equivalence, uncorrelated errors, equal error variances). The second part addressed by @rolando is also interesting: if there's no theoretical or substantive indication that the already established grouping of items makes sense, then you'll have to find a way to highlight the structure of your data with, e.g., exploratory factor analysis. But even if you trust those "characteristics within a group", you can check that this is a valid assumption. Now, you might use a confirmatory factor analysis model to check that the pattern of item loadings (the correlation of an item with its own scale) behaves as expected. Instead of traditional factor analytic methods, you can also take a look at item clustering (Revelle, 1979), which relies on a Cronbach's alpha-based split-rule to group items into homogeneous scales. A final word: if you are using R, there are two very nice packages that will ease the aforementioned steps: psych provides you with everything you need for getting started with psychometric methods, including factor analysis (fa, fa.parallel, principal), item clustering (ICLUST and related methods), and Cronbach's alpha (alpha); there's a nice overview available on William Revelle's website, especially An introduction to psychometric theory with applications in R. psy also includes scree plot visualization (via PCA plus simulated datasets; scree.plot) and MTMM (mtmm). References Campbell, D.T. and Fiske, D.W. (1959). Convergent and discriminant validation by the multitrait-multimethod matrix. Psychological Bulletin, 56: 81–105. Hays, R.D. and Fayers, P. (2005). Evaluating multi-item scales. In Assessing quality of life in clinical trials (Fayers, P. and Hays, R., Eds.), pp. 41-53. Oxford. Revelle, W. (1979). Hierarchical Cluster Analysis and the Internal Structure of Tests. Multivariate Behavioral Research, 14: 57-74.
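To make the package suggestions concrete, here is a minimal, hedged sketch on simulated items (two 5-item scales driven by correlated latent traits; none of this code is from the original answer): psych::alpha gives the within-scale reliability that would sit on the MTMM diagonal, and a plain sum-score correlation summarizes the between-scale relation.

library(psych)
library(MASS)   # for mvrnorm()
set.seed(5)
f <- mvrnorm(300, c(0, 0), matrix(c(1, 0.3, 0.3, 1), 2))  # two correlated traits
A <- replicate(5, f[, 1] + rnorm(300))   # five items for scale A
B <- replicate(5, f[, 2] + rnorm(300))   # five items for scale B
alpha(as.data.frame(A))$total$raw_alpha  # within-scale reliability (Cronbach)
cor(rowMeans(A), rowMeans(B))            # between-scale (score) correlation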
How to compute correlation between/within groups of variables?
What @rolando suggested looks like a good start, if not the whole response (IMO). Let me continue with the correlational approach, following the Classical Test Theory (CTT) framework. Here, as noted b
How to compute correlation between/within groups of variables? What @rolando suggested looks like a good start, if not the whole response (IMO). Let me continue with the correlational approach, following the Classical Test Theory (CTT) framework. Here, as noted by @Jeromy, a summary measure for your group of characteristics might be considered as the totalled (or sum) score of all items (a characteristic, in your words) belonging to what I will now refer to as a scale. Under CTT, this allows us to formalize individual "trait" propensity or liability as one's location on a continuous scale reflecting an underlying construct (a latent trait), although here it is merely an ordinal scale (but this is another debate in the psychometrics literature). What you described has to do with what is known as convergent (to what extent items belonging to the same scale correlate with each other) and discriminant (items belonging to different scales should not correlate to a great extent) validity in psychometrics. Classical techniques include multi-trait multi-method (MTMM) analysis (Campbell & Fiske, 1959). An illustration of how it works is shown below (three methods or instruments, three constructs or traits): In this MTMM matrix, the diagonal elements might be Cronbach's alpha or test-retest intraclass correlation; these are indicators of the reliability of each measurement scale. The validity of the hypothesized (shared) constructs is assessed by the correlation of scale scores when different instruments are used to assess the same trait; if these instruments were developed independently, a high correlation ($> 0.7$) would support the idea that the traits are defined in a consistent and objective manner. The remaining cells in this MTMM matrix summarize relations between traits within method and between traits across methods, and are indicative of the way unique constructs are measured with different scales and of the relations between the traits in a given scale. Assuming independent traits, we generally don't expect them to be high (a recommended threshold is $<.3$), but more formal tests of hypotheses (on correlation point estimates) can be carried out. A subtlety is that we use so-called "rest correlation", that is, we compute the correlation between an item (or trait) and its scale (or method) after removing the contribution of this item to the sum score of this scale (correction for overlap). Even if this method was initially developed to assess convergent and discriminant validity of a certain number of traits as studied by different measurement instruments, it can be applied to a single multi-scale instrument. The traits then become the items, and the methods are just the different scales. A generalization of this method to a single instrument is also known as multitrait scaling. Items correlating as expected (i.e., with their own scale rather than a different scale) are counted as scaling successes. We generally assume, however, that the different scales are not correlated, that is, they are targeting different hypothetical constructs. Averaging the within- and between-scale correlations then provides a quick way of summarizing the internal structure of your instrument. Another convenient way of doing so is to apply a cluster analysis to the matrix of pairwise correlations and see how your variables hang together.
Of note, in both cases the usual caveats of working with correlation measures apply: you cannot account for measurement error, you need a large sample, and instruments or tests are assumed to be "parallel" (tau-equivalence, uncorrelated errors, equal error variances). The second part addressed by @rolando is also interesting: if there's no theoretical or substantive indication that the already established grouping of items makes sense, then you'll have to find a way to highlight the structure of your data with, e.g., exploratory factor analysis. But even if you trust those "characteristics within a group", you can check that this is a valid assumption. Now, you might use a confirmatory factor analysis model to check that the pattern of item loadings (the correlation of an item with its own scale) behaves as expected. Instead of traditional factor analytic methods, you can also take a look at item clustering (Revelle, 1979), which relies on a Cronbach's alpha-based split-rule to group items into homogeneous scales. A final word: if you are using R, there are two very nice packages that will ease the aforementioned steps: psych provides you with everything you need for getting started with psychometric methods, including factor analysis (fa, fa.parallel, principal), item clustering (ICLUST and related methods), and Cronbach's alpha (alpha); there's a nice overview available on William Revelle's website, especially An introduction to psychometric theory with applications in R. psy also includes scree plot visualization (via PCA plus simulated datasets; scree.plot) and MTMM (mtmm). References Campbell, D.T. and Fiske, D.W. (1959). Convergent and discriminant validation by the multitrait-multimethod matrix. Psychological Bulletin, 56: 81–105. Hays, R.D. and Fayers, P. (2005). Evaluating multi-item scales. In Assessing quality of life in clinical trials (Fayers, P. and Hays, R., Eds.), pp. 41-53. Oxford. Revelle, W. (1979). Hierarchical Cluster Analysis and the Internal Structure of Tests. Multivariate Behavioral Research, 14: 57-74.
How to compute correlation between/within groups of variables? What @rolando suggested looks like a good start, if not the whole response (IMO). Let me continue with the correlational approach, following the Classical Test Theory (CTT) framework. Here, as noted b
19,594
How to compute correlation between/within groups of variables?
The way I read your terminology, what you want is first to assess internal consistency within each group of variables, and then to assess the correlations among the scale scores which constitute the average of each group of variables. The first can be done using Cronbach's alpha, and the second using Pearson correlation. This assumes you have reasonably normal distributions and reasonably linear relationships. A more involved method, and not necessarily a required one, would be to conduct an exploratory factor analysis. You would try to establish which variables should be grouped together, and then to what degree those factors are correlated. If you try this method, make sure you use oblique rotation to allow those correlations to show up. Whether you use principal components extraction or principal axis extraction would depend, respectively, on whether your variables are objective, error-free measurements or subjective ones such as survey items that contain a certain amount of error.
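A hedged sketch of that EFA route (simulated items; the data and names are illustrative, not the asker's): psych::fa() with an oblique rotation returns both the loadings, which show how items group, and the factor correlation matrix Phi.

library(psych)
library(GPArotation)   # supplies the oblimin rotation used by fa()
set.seed(6)
f1 <- rnorm(300); f2 <- 0.4 * f1 + rnorm(300)   # two correlated factors
items <- cbind(replicate(4, f1 + rnorm(300)),   # items 1-4 load on factor 1
               replicate(4, f2 + rnorm(300)))   # items 5-8 load on factor 2
ef <- fa(items, nfactors = 2, rotate = "oblimin", fm = "pa")  # principal axis
print(ef$loadings, cutoff = 0.3)  # items should group onto their own factor
ef$Phi                            # factor correlations; visible only obliquely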
How to compute correlation between/within groups of variables?
The way I read your terminology, what you want is first to assess internal consistency within each group of variables, and then to assess the correlations among the scale scores which constitute the a
How to compute correlation between/within groups of variables? The way I read your terminology, what you want is first to assess internal consistency within each group of variables, and then to assess the correlations among the scale scores which constitute the average of each group of variables. The first can be done using Cronbach's alpha, and the second using Pearson correlation. This assumes you have reasonably normal distributions and reasonably linear relationships. A more involved method, and not necessarily a required one, would be to conduct an exploratory factor analysis. You would try to establish which variables should be grouped together, and then to what degree those factors are correlated. If you try this method, make sure you use oblique rotation to allow those correlations to show up. Whether you use principal components extraction or principal axis extraction would depend, respectively, on whether your variables are objective, error-free measurements or subjective ones such as survey items that contain a certain amount of error.
How to compute correlation between/within groups of variables? The way I read your terminology, what you want is first to assess internal consistency within each group of variables, and then to assess the correlations among the scale scores which constitute the a
19,595
How to compute correlation between/within groups of variables?
The standard tools, at least in psychology, in your situation would be exploratory and confirmatory factor analysis to assess the convergence of the inter-item correlation matrix with some proposed model of the relationship between factors and items. The way that you have phrased your question suggests that you might not be familiar with this literature. For example, here are my notes on scale construction and factor analysis, and here is a tutorial in R on factor analysis from Quick-R. Thus, while it's worth answering your specific question, I think that your broader aims will be better served by examining factor analytic approaches to evaluating multi-item, multi-factor scales. Another standard strategy would be to calculate total scores for each group of variables (what I would call a "scale") and correlate the scales. Many reliability analysis tools will report the average inter-item correlation. If you created the 50 by 50 matrix of correlations between items, you could write a function in R that averaged subsets based on combinations of groups of variables. You might not get what you want if you have a mixture of positive and negative items, as the negative correlations might cancel out the positive correlations.
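Here is one hedged way to write the averaging function suggested above (the grouping vector and the data in the usage comment are illustrative, not from the original answer):

avg_block_cor <- function(R, groups) {
  gs  <- unique(groups)
  out <- matrix(NA, length(gs), length(gs), dimnames = list(gs, gs))
  for (i in gs) for (j in gs) {
    block <- R[groups == i, groups == j, drop = FALSE]
    if (i == j) block <- block[lower.tri(block)]  # drop the 1s on the diagonal
    out[i, j] <- mean(block)
  }
  out
}
## e.g. avg_block_cor(cor(items), rep(c("A", "B"), each = 25)) for 50 items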
How to compute correlation between/within groups of variables?
The standard tools, at least in psychology, in your situation would be exploratory and confirmatory factor analysis to assess the convergence of the inter-item correlation matrix with some proposed mo
How to compute correlation between/within groups of variables? The standard tools, at least in psychology, in your situation would be exploratory and confirmatory factor analysis to assess the convergence of the inter-item correlation matrix with some proposed model of the relationship between factors and items. The way that you have phrased your question suggests that you might not be familiar with this literature. For example, here are my notes on scale construction and factor analysis, and here is a tutorial in R on factor analysis from Quick-R. Thus, while it's worth answering your specific question, I think that your broader aims will be better served by examining factor analytic approaches to evaluating multi-item, multi-factor scales. Another standard strategy would be to calculate total scores for each group of variables (what I would call a "scale") and correlate the scales. Many reliability analysis tools will report the average inter-item correlation. If you created the 50 by 50 matrix of correlations between items, you could write a function in R that averaged subsets based on combinations of groups of variables. You might not get what you want if you have a mixture of positive and negative items, as the negative correlations might cancel out the positive correlations.
How to compute correlation between/within groups of variables? The standard tools, at least in psychology, in your situation would be exploratory and confirmatory factor analysis to assess the convergence of the inter-item correlation matrix with some proposed mo
19,596
How to compute correlation between/within groups of variables?
I would suggest using, as a replacement for the notion of correlation (which is defined only for pairs of variables), the notions of mutual information and integration in Gaussian models. In Gaussian models, the integration of a group of variables $G_1$ is defined via the entropy of the group: $I_1 \propto \log(|C_1|)$, where $C_1$ is the correlation matrix of the group of variables $G_1$. It is easy to see that if $G_1$ comprises only 2 variables, its integration is $\log(1 - \rho^2)$, which directly relates to the pairwise correlation coefficient $\rho$ of the variables. To compute the interaction between two groups of variables, you can use mutual information, which compares the joint entropy with the sum of the group entropies: $MU_{12} = I_{1} + I_{2} - I_{12}$ I found a reference on these notions after a quick Google search that might be helpful.
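A hedged numeric sketch of these quantities (mine, not the answerer's; entropies are taken up to the additive Gaussian constant, so only differences matter):

library(MASS)   # for mvrnorm()
set.seed(7)
S <- diag(4) * 0.5 + 0.5             # equicorrelated variables, rho = 0.5
Z <- mvrnorm(1000, rep(0, 4), S)
C <- cor(Z)
I1  <- 0.5 * log(det(C[1:2, 1:2]))   # integration of group 1
I2  <- 0.5 * log(det(C[3:4, 3:4]))   # integration of group 2
I12 <- 0.5 * log(det(C))             # integration of all four variables
I1 + I2 - I12                        # mutual information between groups (nats)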
How to compute correlation between/within groups of variables?
I would suggest using, as a replacement for the notion of correlation (which is defined only for pairs of variables), the notions of mutual information and integration in Gaussian models. In Gaussian models, the inte
How to compute correlation between/within groups of variables? I would suggest using, as a replacement for the notion of correlation (which is defined only for pairs of variables), the notions of mutual information and integration in Gaussian models. In Gaussian models, the integration of a group of variables $G_1$ is defined via the entropy of the group: $I_1 \propto \log(|C_1|)$, where $C_1$ is the correlation matrix of the group of variables $G_1$. It is easy to see that if $G_1$ comprises only 2 variables, its integration is $\log(1 - \rho^2)$, which directly relates to the pairwise correlation coefficient $\rho$ of the variables. To compute the interaction between two groups of variables, you can use mutual information, which compares the joint entropy with the sum of the group entropies: $MU_{12} = I_{1} + I_{2} - I_{12}$ I found a reference on these notions after a quick Google search that might be helpful.
How to compute correlation between/within groups of variables? I would suggest using, as a replacement for the notion of correlation (which is defined only for pairs of variables), the notions of mutual information and integration in Gaussian models. In Gaussian models, the inte
19,597
Looking for good introductory treatment of meta-analysis
I have two suggestions: Systematic Reviews in Health Care: Meta-Analysis in Context (Amazon link) Introduction to Meta-Analysis (Statistics in Practice) (Amazon link) Both books are very good, including introductory information as well as detailed information about how to actually perform meta-analyses.
Looking for good introductory treatment of meta-analysis
I have two suggestions: Systematic Reviews in Health Care: Meta-Analysis in Context (Amazon link) Introduction to Meta-Analysis (Statistics in Practice) (Amazon link) Both books are very good, inclu
Looking for good introductory treatment of meta-analysis I have two suggestions: Systematic Reviews in Health Care: Meta-Analysis in Context (Amazon link) Introduction to Meta-Analysis (Statistics in Practice) (Amazon link) Both books are very good, including introductory information as well as detailed information about how to actually perform meta-analyses.
Looking for good introductory treatment of meta-analysis I have two suggestions: Systematic Reviews in Health Care: Meta-Analysis in Context (Amazon link) Introduction to Meta-Analysis (Statistics in Practice) (Amazon link) Both books are very good, inclu
19,598
Looking for good introductory treatment of meta-analysis
I'll add an independent recommendation for Jeromy's blog post, and second the suggestions of James DeCoster's notes and the Borenstein textbook (propofols' no. 2). At risk of indulging in self-promotion, I recently published a methods paper entitled Getting Started with Meta-analysis. It's aimed at ecologists and evolutionary biologists, so the examples are taken from these fields, but I hope it will be useful for those working in other areas.
Looking for good introductory treatment of meta-analysis
I'll add an independent recommendation for Jeromy's blog post, and second the suggestions of James DeCoster's notes and the Borenstein textbook (propofols' no. 2). At risk of indulging in self-promoti
Looking for good introductory treatment of meta-analysis I'll add an independent recommendation for Jeromy's blog post, and second the suggestions of James DeCoster's notes and the Borenstein textbook (propofols' no. 2). At risk of indulging in self-promotion, I recently published a methods paper entitled Getting Started with Meta-analysis. It's aimed at ecologists and evolutionary biologists, so the examples are taken from these fields, but I hope it will be useful for those working in other areas.
Looking for good introductory treatment of meta-analysis I'll add an independent recommendation for Jeromy's blog post, and second the suggestions of James DeCoster's notes and the Borenstein textbook (propofols' no. 2). At risk of indulging in self-promoti
19,599
Looking for good introductory treatment of meta-analysis
I wrote a post a while back on getting started with meta analysis with: (a) tips on getting started, (b) links to online introductory texts, and (c) links to free software for meta analysis. Specifically, you might want to read James DeCoster's notes.
Looking for good introductory treatment of meta-analysis
I wrote a post a while back on getting started with meta analysis with: (a) tips on getting started, (b) links to online introductory texts, and (c) links to free software for meta analysis. Specific
Looking for good introductory treatment of meta-analysis I wrote a post a while back on getting started with meta analysis with: (a) tips on getting started, (b) links to online introductory texts, and (c) links to free software for meta analysis. Specifically, you might want to read James DeCoster's notes.
Looking for good introductory treatment of meta-analysis I wrote a post a while back on getting started with meta analysis with: (a) tips on getting started, (b) links to online introductory texts, and (c) links to free software for meta analysis. Specific
19,600
Looking for good introductory treatment of meta-analysis
Fredric M. Wolf's little green Sage book is worth the $18 or so. "Pleasantly mathematical" but not too technical, not too dogmatic either (it's a fiercely contested field, you probably know), good for a person with what I'd call intermediate-level stats/research experience.
Looking for good introductory treatment of meta-analysis
Fredric M. Wolf's little green Sage book is worth the $18 or so. "Pleasantly mathematical" but not too technical, not too dogmatic either (it's a fiercely contested field, you probably know), good fo
Looking for good introductory treatment of meta-analysis Fredric M. Wolf's little green Sage book is worth the $18 or so. "Pleasantly mathematical" but not too technical, not too dogmatic either (it's a fiercely contested field, you probably know), good for a person with what I'd call intermediate-level stats/research experience.
Looking for good introductory treatment of meta-analysis Fredric M. Wolf's little green Sage book is worth the $18 or so. "Pleasantly mathematical" but not too technical, not too dogmatic either (it's a fiercely contested field, you probably know), good fo